0.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Role: Senior Analyst - Data Science
Experience: 3 to 8 years
Location: Chennai

Job Description:
Responsibilities:
- Extract, clean, and analyze large datasets using SQL and Python.
- Develop statistical models to identify patterns and trends in data.
- Apply machine learning (supervised / unsupervised) algorithms to solve business problems (a minimal sketch follows this listing).
- Design and execute data-driven experiments to optimize performance metrics.
- Build scalable, automated processes for data analysis and predictive modeling.
- Create model pipelines using AWS SageMaker.
- Communicate findings effectively to both technical and non-technical stakeholders.

Requirements:
- Proficiency in SQL, Python, Statistics, and Machine Learning techniques.
- Strong experience in data wrangling, feature engineering, and model evaluation.
- Expertise in supervised and unsupervised learning algorithms.
- Knowledge of data visualization tools and techniques.
- Excellent problem-solving and communication skills.

Skills: SQL, Python, Statistics, Machine Learning, Data Science, strong communication skills, AWS SageMaker, MLOps

Job Snapshot:
Updated Date: 22-05-2025
Job ID: J_3620
Location: Chennai, Tamil Nadu, India
Experience: 3 - 8 Years
Employee Type: Permanent
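To make the supervised-modeling workflow above concrete, here is a minimal sketch in Python: load data, engineer features, train a model, and evaluate it. The dataset here is synthetic and the column names are hypothetical stand-ins for data that would normally be extracted via SQL.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a table that would normally be pulled via SQL.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, 5000),
    "monthly_spend": rng.gamma(2.0, 40.0, 5000),
    "support_tickets": rng.poisson(1.5, 5000),
})
df["churned"] = ((df["support_tickets"] > 2) & (df["tenure_months"] < 12)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["churned"]), df["churned"],
    test_size=0.2, random_state=42, stratify=df["churned"])

# Train a supervised classifier and report a standard evaluation metric.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```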
Posted 3 weeks ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ Years

We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities:
- Develop and maintain web applications using Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems (see the sketch after this listing).
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8b Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
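As a hedged illustration of "building and integrating APIs for AI/ML models", here is a minimal Flask endpoint serving a PyTorch model. The inline two-layer model is a stand-in; in practice you would torch.jit.load() a trained checkpoint produced by your pipeline, and the /predict input schema is an assumption.

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a trained model; normally loaded from a saved checkpoint.
model = torch.jit.script(torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid()))
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [0.1, 0.4, 0.2, 0.9]
    with torch.no_grad():
        score = model(torch.tensor([features], dtype=torch.float32)).item()
    return jsonify({"score": score})

if __name__ == "__main__":
    app.run(port=8000)  # POST /predict with {"features": [...]}
```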
Posted 3 weeks ago
2.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Company Description
Bytezera is a data services provider that specialises in AI and data solutions to help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation.

About the Role
We are seeking a highly skilled and hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle, from data preparation and model training to deployment and optimization, with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases.

Key Responsibilities
- Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents.
- Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications.
- Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone (see the retrieval sketch after this listing).
- Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks.
- Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models.
- Prepare and manage high-quality datasets for model training and evaluation.
- Deploy AI models in cloud-native production environments using AWS services (e.g., SageMaker, Lambda, Bedrock).
- Containerize and orchestrate deployments with Docker and Kubernetes.
- Continuously monitor model performance and improve accuracy, efficiency, and scalability.
- Collaborate with cross-functional teams to ensure seamless integration and delivery of AI capabilities.

Experience & Qualifications
- 2+ years of hands-on experience in AI engineering, machine learning, or data science roles.
- Proven track record in building and deploying NLP or Generative AI models.
- Experience with agentic workflows or autonomous AI agents is highly desirable.

Technical Skills
- Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy
- LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models
- Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents
- Vector Databases: Pinecone or ChromaDB
- DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3)
- Core ML Skills: data preprocessing, feature engineering, model evaluation, and optimization

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
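The retrieval half of a RAG pipeline, as described above, can be sketched in a few lines with FAISS: embed documents, index them, and fetch the top-k passages for a query. The embed() function below is a random-vector placeholder standing in for a real sentence-embedding model (e.g. a Hugging Face encoder); dimensions and document texts are illustrative.

```python
import faiss
import numpy as np

DIM = 384

def embed(texts):
    # Placeholder: a real pipeline would call a sentence-embedding model here.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), DIM), dtype=np.float32)

docs = ["refund policy ...", "shipping times ...", "warranty terms ..."]
index = faiss.IndexFlatIP(DIM)      # inner-product similarity index
index.add(embed(docs))              # index the document embeddings

# Retrieve the two most similar passages for a query; these would be
# prepended to the LLM prompt as grounding context.
scores, ids = index.search(embed(["how do refunds work?"]), k=2)
context = [docs[i] for i in ids[0]]
print(context)
```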
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:
We are seeking a skilled Data Scientist to join our team, with expertise in AWS cloud services, Big Data technologies, and AI/ML solutions, including Generative AI. The ideal candidate will be responsible for analyzing large datasets, building predictive models, and delivering insights to drive business decisions.

Key Responsibilities:
- Design, develop, and deploy machine learning models using AWS services (e.g., SageMaker, Redshift, EMR).
- Work with Big Data tools such as Hadoop, Spark, and Hive to process and analyze large-scale datasets (a Spark sketch follows this listing).
- Collaborate with data engineers, product managers, and stakeholders to translate business needs into data-driven solutions.
- Build and optimize data pipelines for model training and evaluation.
- Stay current with the latest trends in AI/ML and cloud technologies.

Requirements:
- Proven experience with AWS cloud.
- Strong background in Big Data and distributed computing (Hadoop/Spark).
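A minimal PySpark sketch of the large-scale processing this role describes, assuming event data lands in S3 as Parquet; the bucket, paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-summary").getOrCreate()

# Read a large event dataset and roll it up to daily aggregates.
events = spark.read.parquet("s3://example-bucket/events/")
daily = (events
         .groupBy(F.to_date("event_ts").alias("day"))
         .agg(F.count("*").alias("events"),
              F.countDistinct("user_id").alias("users")))

daily.write.mode("overwrite").parquet("s3://example-bucket/daily_summary/")
```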
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: AIML Data Scientist
Location: Chennai
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Job Description
- Be a hands-on problem solver with a consultative approach, who can apply Machine Learning and Deep Learning algorithms to solve business challenges.
- Use knowledge of a wide variety of AI/ML techniques and algorithms to find what combinations of these techniques can best solve the problem.
- Improve model accuracy to deliver greater business impact, and estimate the business impact due to deployment of the model.
- Work with the domain/customer teams to understand business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge.
- Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development (Python / R / SQL / cloud data pipelines).
- Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch.
- Experience in using Deep Learning models with text, speech, image, and video data.
- Design and develop NLP models for Text Classification, Custom Entity Recognition, Relationship Extraction, Text Summarization, Topic Modeling, Reasoning over Knowledge Graphs, and Semantic Search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc. (a spaCy sketch follows this listing).
- Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
- Knowledge of state-of-the-art Deep Learning algorithms.
- Optimize and tune Deep Learning models for the best possible accuracy.
- Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau.
- Work with application teams in deploying models on cloud as a service or on-prem.
- Deploy models in a Test/Control framework for tracking.
- Build CI/CD pipelines for ML model deployment.
- Integrate AI & ML models with other applications using REST APIs and other connector technologies.
- Constantly upskill and update with the latest techniques and best practices; write white papers and create demonstrable assets to summarize the AIML work and its impact.

Technology/Subject Matter Expertise
- Sufficient expertise in machine learning, mathematical and statistical sciences.
- Use of versioning and collaborative tools like Git / GitHub.
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming.
- Develop prototype-level ideas into a solution that can scale to industrial-grade strength.
- Ability to quantify and estimate the impact of ML models.

Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground.
- Must have the ability to share, explain and "sell" their thoughts, processes, ideas and opinions, even outside their own span of control.
- Ability to think ahead and anticipate the needs for solving the problem.
- Ability to communicate key messages effectively and articulate strong opinions in large forums.

Desirable Experience:
- Keen contributor to open-source communities, and communities like Kaggle.
- Ability to process huge amounts of data using PySpark/Hadoop.
- Development and application of Reinforcement Learning.
- Knowledge of Optimization/Genetic Algorithms.
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios.
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy will be important.
- Experience of working with AI & Cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, Google Cloud will be a big plus.
- Experience in platforms like DataRobot, CognitiveScale, H2O.ai will be a big plus.
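A small sketch of one NLP task named above, entity recognition with spaCy. It assumes the public en_core_web_sm model is installed (`python -m spacy download en_core_web_sm`); the example sentence is illustrative.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple acquired Beats Electronics for $3 billion in 2014.")

# Print each detected entity with its label,
# e.g. Apple ORG, $3 billion MONEY, 2014 DATE.
for ent in doc.ents:
    print(ent.text, ent.label_)
```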
Posted 3 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary
We are seeking a skilled AI Product Owner / Business Analyst to lead the design, development, and delivery of AI-powered solutions. The ideal candidate will bridge the gap between business needs and technical execution, working closely with cross-functional teams to drive product strategy and execution.

Key Responsibilities
- Define and prioritize AI/ML product features and roadmaps in alignment with business goals.
- Translate business requirements into clear, actionable user stories and acceptance criteria.
- Collaborate with data scientists, engineers, and stakeholders to ensure product feasibility and success.
- Conduct market, user, and data analysis to guide product direction.
- Own the product backlog, sprint planning, and delivery lifecycle.
- Monitor KPIs and performance metrics to assess the impact of AI solutions.

Required Skills & Qualifications
- 6–10 years of experience in product ownership or business analysis, with 3+ years in AI/ML initiatives.
- Strong understanding of AI/ML technologies, NLP, predictive analytics, and data pipelines.
- Experience with Agile methodologies, Jira, and product management tools.
- Excellent communication, stakeholder management, and problem-solving skills.
- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.

Preferred Qualifications
- Experience working with AI platforms (Azure ML, AWS SageMaker, Google Vertex AI).
- Familiarity with regulatory and ethical considerations in AI development.
Posted 3 weeks ago
0.0 - 2.0 years
0 Lacs
Salem, Tamil Nadu
On-site
About The Role:
As a Subject Matter Expert (SME) in Data Annotation, you will play a critical role in ensuring the highest quality of data labelling across various projects. You will act as a technical and domain expert, mentor annotation teams, establish annotation guidelines, conduct quality audits, and support client and internal teams with domain-specific insights.

Tools Experience Expected: CVAT, Amazon SageMaker, BasicAI, Label Studio, SuperAnnotate, Loft, Cogito, Roboflow, Slicer3D, Mindkosh, Kognic, Praat

Annotation Expertise Areas:
- Image, Video: Bounding Box, Polygon, Semantic Segmentation, Keypoints (a bounding-box record sketch follows this listing)
- 3D Point Cloud: LiDAR Annotation, 3D Cuboids, Semantic Segmentation
- Audio Annotation: Speech, Noise Labelling, Transcription
- Text Annotation: NER, Sentiment Analysis, Intent Detection, NLP tasks
- Exposure to LLMs and Generative AI data annotation tasks (prompt generation, evaluation)

Key Responsibilities:
- Act as a Subject Matter Expert to guide annotation standards, processes, and best practices.
- Create, refine, and maintain detailed annotation guidelines and ensure adherence across teams.
- Conduct quality audits and reviews to maintain high annotation accuracy and consistency.
- Provide domain-specific training to Data Annotators and Team Leads.
- Collaborate closely with Project Managers, Data Scientists, and Engineering teams for dataset quality assurance.
- Resolve complex annotation issues and edge cases with data-centric solutions.
- Stay current with advancements in AI/ML and annotation technologies and apply innovative methods.
- Support pre-sales and client discussions as an annotation domain expert, when required.

Key Performance Indicators (KPIs):
- Annotation quality and consistency across projects
- Successful training and upskilling of annotation teams
- Timely resolution of annotation queries and technical challenges
- Documentation of guidelines and standards
- Client satisfaction on annotation quality benchmarks

Qualifications:
- Bachelor's or master's degree in a relevant field (Computer Science, AI/ML, Data Science, Linguistics, Engineering, etc.)
- 3–6 years of hands-on experience in data annotation, with exposure to multiple domains (vision, audio, text, 3D).
- Deep understanding of annotation processes, tool expertise, and quality standards.
- Prior experience in quality control, QA audits, or an SME role in annotation projects.
- Strong communication skills to deliver training, documentation, and client presentations.
- Familiarity with AI/ML workflows, data preprocessing, and dataset management concepts is highly desirable.

Work Location: In person (Salem, Tamil Nadu)
Schedule: Day shift, Monday to Saturday; weekend availability required
Supplemental Pay: overtime pay, performance bonus, shift allowance, yearly bonus
Interview Type: Walk-in (293/4, MG Rd, New Fairlands, Salem - 636016)
Contact: 9489979523 (HR)
Walk-in timings: 22nd May 2025 (2.30pm-6pm), 23rd May 2025 (9.00am-6pm)
Job Type: Full-time
Pay: ₹25,000.00 - ₹30,000.00 per month
Experience: data annotation: 2 years (Preferred)
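To illustrate the bounding-box annotation work listed above, here is a single COCO-style record of the kind exported by tools such as CVAT or Label Studio. All IDs and coordinates are made up; bbox follows the COCO convention of [x, y, width, height] in pixels.

```python
# One hypothetical image annotation in COCO format.
annotation = {
    "id": 101,
    "image_id": 7,
    "category_id": 3,                   # e.g. "car" in the label taxonomy
    "bbox": [412.0, 157.5, 183.0, 96.0],  # [x, y, width, height]
    "area": 183.0 * 96.0,
    "iscrowd": 0,
}
print(annotation)
```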
Posted 3 weeks ago
0.0 - 8.0 years
0 Lacs
Delhi
On-site
Job Title: Software Engineer – AI/ML
Location: Delhi
Experience: 4-8 years

About the Role:
We are seeking a highly experienced and innovative AI & ML engineer to lead the design, development, and deployment of advanced AI/ML solutions, including Large Language Models (LLMs), for enterprise-grade applications. You will work closely with cross-functional teams to drive AI strategy, define architecture, and ensure scalable and efficient implementation of intelligent systems.

Key Responsibilities:
- Design and architect end-to-end AI/ML solutions including data pipelines, model development, training, and deployment.
- Develop and implement ML models for classification, regression, NLP, computer vision, and recommendation systems.
- Build, fine-tune, and integrate Large Language Models (LLMs) such as GPT, BERT, LLaMA, etc., into enterprise applications (see the sketch after this listing).
- Evaluate and select appropriate frameworks, tools, and technologies for AI/ML projects.
- Lead AI experimentation, proof-of-concepts (PoCs), and model performance evaluations.
- Collaborate with data engineers, product managers, and software developers to integrate models into production environments.
- Ensure robust MLOps practices, version control, reproducibility, and model monitoring.
- Stay up to date with advancements in AI/ML, especially in generative AI and LLMs, and apply them innovatively.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- Minimum 4+ years of experience in AI/ML.
- Deep understanding of machine learning algorithms, neural networks, and deep learning architectures.
- Proven experience working with LLMs, transformer models, and prompt engineering.
- Hands-on experience with ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, etc.
- Proficiency in Python and experience with cloud platforms (AWS, Azure, or GCP) for ML workloads.
- Strong knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, etc.) and practices.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and embeddings.
- Exposure to real-time AI systems, streaming data, or edge AI.
- Contributions to AI research, open-source projects, or publications in AI/ML.

Interested candidates, kindly apply here or share your resume at hr@softprodigy.com
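A hedged sketch of integrating a pretrained transformer into an application, using the Hugging Face pipeline API named in the requirements. The checkpoint shown is one public example, not a prescribed choice; it downloads on first run.

```python
from transformers import pipeline

# Load a public sentiment model behind the high-level pipeline API.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The new release fixed our latency issues."))
# [{'label': 'POSITIVE', 'score': ...}]
```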
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Generative AI Engineer

Role Overview
We are looking for a highly skilled Generative AI Engineer with deep experience in building and deploying AI agent-based systems, Retrieval-Augmented Generation (RAG) frameworks, and large language models (LLMs). The ideal candidate will bring hands-on experience with AWS Bedrock and other Gen AI platforms, with a proven ability to fine-tune foundation models and develop domain-specific LLMs. This role offers the opportunity to work at the forefront of applied AI, solving real-world problems through intelligent, adaptive, and scalable AI solutions.

Key Responsibilities
- AI Agent Development: Design and implement multi-agent AI systems capable of orchestrating reasoning, planning, and task execution using LLMs.
- RAG Frameworks: Build and optimize Retrieval-Augmented Generation pipelines using vector databases and LLMs to improve contextual understanding and response generation.
- LLM Fine-Tuning & Customization: Fine-tune and customize foundation models (e.g., Claude, Llama, Titan, Falcon) for enterprise-specific use cases and domains.
- Infrastructure Integration: Leverage AWS services (including Bedrock, SageMaker, Lambda, Step Functions) to scale and deploy generative AI solutions efficiently and securely (a Bedrock invocation sketch follows this listing).
- Model Evaluation & Governance: Define accuracy metrics, evaluate model performance, and ensure compliance with security, privacy, and ethical standards.
- Collaboration and Team Management: Work closely with product managers, data scientists, MLOps engineers, and domain SMEs to translate business needs into scalable Gen AI applications.

Required Skills
- Strong experience with Gen AI ecosystems, including Bedrock, Hugging Face, LangChain, or similar.
- Proven expertise in LLM fine-tuning, prompt engineering, and building domain-specific models.
- Hands-on experience with agentic AI systems and RAG architecture (e.g., using tools like LangGraph, Haystack, or DSPy).
- Solid programming experience in Python and frameworks such as PyTorch or TensorFlow.
- Experience with vector databases (e.g., FAISS, Pinecone, Weaviate) for retrieval components.
- Familiarity with the AWS AI/ML stack, including security and deployment best practices.

Preferred Qualifications
- Master's or PhD in Computer Science, Machine Learning, or a related field.
- Minimum 5 years of experience building AI/ML projects.
- Experience building AI-powered copilots, assistants, or autonomous agents in real-world production environments.
- Knowledge of LLM evaluation techniques, model distillation, and optimization.
- Strong understanding of LLM limitations, hallucinations, and guardrails.
- Excellent problem-solving and communication skills, with the ability to convey complex AI concepts to non-technical audiences.
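A minimal sketch of calling a foundation model on AWS Bedrock with boto3, as the infrastructure responsibility above describes. The model ID and request body follow Anthropic's Bedrock messages format; treat the specific model version, region, and prompt as illustrative assumptions, and note the call requires AWS credentials with Bedrock access.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic-on-Bedrock messages payload; model ID is one public example.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize RAG in two sentences."}],
}

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```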
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Staff Data Engineer

About Trimble
Trimble is a leading provider of advanced positioning solutions that maximize productivity and enhance profitability for our customers. We are an exciting, entrepreneurial company, with a history of exceptional growth coupled with a disciplined and strategic focus on being the best. While GPS is at our core, we have grown beyond this technology to embrace other sophisticated positioning technologies and, in doing so, we are changing the way the world works. Those who successfully lead others to meet our objectives are vital to our organization. Leadership at Trimble is much more than simply exercising assigned authority; we expect our leaders to embrace a mission-focused leadership style, demonstrating the strength of character, intellect and the ability to convert ideas to reality. www.trimble.com

Job Summary
We are looking for a highly skilled Staff Engineer for our Data and Cloud Engineering team with expertise in AWS and Azure. The ideal candidate will have a strong technical background in designing, building, developing, and implementing data pipelines and cloud solutions, along with excellent technical guidance and communication skills. This role requires a strong technical background in cloud platforms, data architecture, and engineering best practices.

Key Responsibilities
- Lead the design and implementation of robust, scalable, and secure cloud-based data pipelines and architectures in MS Azure.
- Provide technical direction and mentorship to a team of engineers; ensure best practices in code quality, architecture, and design.
- Design and implement secure, scalable, and high-performance cloud infrastructure.
- Manage cloud resources, optimize costs, and ensure high availability and disaster recovery.
- Automate infrastructure provisioning and deployment processes using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and ARM templates.
- Collaborate with cross-functional teams to understand data needs and deliver comprehensive cloud solutions.
- Oversee cloud infrastructure management, including monitoring, maintenance, and scaling of cloud resources.
- Ensure compliance with industry standards and regulatory requirements.
- Implement data governance policies and practices and ensure high data quality, integrity, and security across all cloud platforms.
- Develop and enforce best practices for cloud and data engineering, including documentation and code standards.
- Oversee the design and development of robust data pipelines and ETL processes (a Glue orchestration sketch follows this listing).
- Identify and implement process improvements to enhance the efficiency, quality, and scalability of data engineering and cloud operations.
- Troubleshoot and resolve complex technical issues related to data pipelines and cloud infrastructure.
- Stay current with emerging technologies and industry trends to drive innovation.

Tech Stack
- Infrastructure: Glue, Lambda, Step Functions, Batch, ECS, QuickSight, Machine Learning, SageMaker, Dagster
- DevOps: CloudFormation, Terraform, Git, CodeBuild
- Database: Redshift, PostgreSQL, DynamoDB, Athena (Trino), Snowflake, Databricks
- Language: Bash, Python (PySpark, Pydantic, PyArrow), SQL

Qualifications
- Min. 6 years of proven experience as a senior data and cloud engineer or similar role.
- Extensive experience with AWS and Azure cloud platforms and their data services (e.g., AWS Redshift, AWS Glue, AWS S3, Azure Data Lake, Azure Synapse, Snowflake, Databricks).
- Strong understanding of ETL processes, data warehousing, and big data technologies.
- Proficiency in SQL, Python, and other relevant programming languages.
- Experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or ARM templates.
- Knowledge of containerization and orchestration (e.g., Docker, Kubernetes).
- Understanding of cloud cost management and optimization strategies.
- Familiarity with CI/CD pipelines and DevOps practices.
- Excellent leadership, communication, and interpersonal skills.
- Ability to work in a fast-paced, dynamic environment.
- Strong problem-solving and analytical skills.
- Familiarity with data visualization tools (e.g., Power BI, QuickSight) is a plus.
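As a hedged sketch of orchestrating one ETL step from the tech stack above, here is a boto3 snippet that starts an AWS Glue job and polls its state. The job name is a hypothetical placeholder, and the call assumes AWS credentials with Glue permissions.

```python
import time
import boto3

glue = boto3.client("glue")

# Kick off a pre-defined Glue ETL job (name is illustrative).
run_id = glue.start_job_run(JobName="nightly-sales-etl")["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    run = glue.get_job_run(JobName="nightly-sales-etl", RunId=run_id)
    state = run["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED"):
        print("Glue job finished:", state)
        break
    time.sleep(30)
```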
Posted 3 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Company Description
Prakash Software Solutions Pvt Ltd (PSSPL), founded in 2001, is a globally recognized software development consultancy in the IT space. As a certified Microsoft Solution Partner for Data & AI and Digital & App Innovation (Azure), and an ISO 9001:2015 & ISO 27001:2022 certified company, PSSPL has developed over 500 custom B2B and B2C applications for diverse industries. We provide end-to-end mobile and web development, UI/UX design, cloud-based solutions, AR, VR, AI, Big Data, and IoT solutions, as well as advanced quality assurance and dedicated Agile teams. Our focus is on delivering high-quality projects through frequent and open communication and collaboration with our clients.

Role Description
Experience: 4+ yrs.
Location: Ahmedabad/Vadodara
Skills required: technical knowledge of Azure, AWS, Python, Jupyter Notebook, Colab, ChatGPT, Jurassic-1, Matplotlib, scikit-learn, PyTorch, spaCy, Hugging Face, SpeechBrain, Wav2Letter, Elasticsearch, AWS Transcribe, Textract, Amazon SageMaker; Natural Language Processing (NLP), Voice App Development, Data Structures & Algorithms, Web Development, Machine Learning, Deep Learning, Data Science, Tableau, SAS Programming, SQL for Data Analytics, Power BI, Clinical Trial Analysis & Reporting, Git & GitHub
Soft Skills: leadership skills, managerial skills, communication skills, presentation skills, analytical & logical skills, team building & client relationship management

What you will do:
- Project Planning and Management: Develop and execute comprehensive project plans, including scope, timelines, resources, and budget allocation. Define project milestones and deliverables, ensuring adherence to project management best practices throughout the project lifecycle.
- Requirement Analysis: Collaborate with clients and stakeholders to gather and analyse business requirements. Translate these requirements into technical specifications and project deliverables. Identify any customization or integration needs and define the technical approach accordingly.
- Solution Architecture: Create comprehensive AI/ML system designs, including architecture, data pipelines, algorithms, and model selection. Ensure that the solution aligns with business goals and is scalable and maintainable. Evaluate and select the most suitable AI/ML technologies, frameworks, and tools for each project. Stay up to date with emerging AI/ML trends and technologies.
- Team Coordination: Lead and manage cross-functional project teams, including developers, consultants, and other stakeholders. Define roles and responsibilities, assign tasks, and provide guidance and support throughout the project lifecycle. Foster effective collaboration and communication among team members.
- Risk and Issue Management: Identify and proactively mitigate project risks and issues. Develop contingency plans to address any potential obstacles that may arise during the project. Monitor project progress, track project metrics, and communicate project status to stakeholders, ensuring transparency and timely reporting.
- Quality Assurance: Establish and enforce quality standards and best practices throughout the project. Conduct regular quality reviews and ensure adherence to coding standards, testing protocols, and documentation requirements. Perform thorough testing and coordinate user acceptance testing (UAT) activities.
- Client Relationship Management: Build and maintain strong relationships with clients and stakeholders. Provide regular project updates, manage expectations, and address any concerns or issues that arise. Ensure a high level of client satisfaction by delivering projects that meet or exceed their expectations.
- Continuous Improvement: AI/ML technologies are rapidly evolving. Stay current with the latest research, frameworks, algorithms, and best practices in AI/ML. Subscribe to relevant journals and blogs, and attend conferences and workshops. Invest in ongoing education and training for your AI/ML professionals: provide opportunities for them to acquire new skills and certifications, attend training programs, and participate in online courses.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field (master's degree preferred).
- Practical experience in at least one of Java, Python, or JavaScript.
- Practical experience with at least one of Spring, Flask, Django, or Node.js.
- 3+ years of experience.
- Effective communication skills, including uplevelling communication for various leadership levels.
- Experience moving technical or engineering programs and products from inception to delivery, articulating the impact using metrics.
- Collaborate with the team in solutioning, design, and code reviews.
- Analytical and problem-solving experience with large-scale systems.
- Experience establishing work relationships across multi-disciplinary teams working remotely.
- Interpersonal skills, including relationship building and collaboration within a diverse, cross-functional team to develop solutions.
- Organizational and coordination skills, along with multitasking experience, to get things done in an ambiguous and fast-paced environment.
- Analytical mindset with the ability to identify and mitigate risks, solve problems, and make data-driven decisions.
- Strong organizational skills and attention to detail, with the ability to manage multiple projects simultaneously.
- Acquire, clean, and preprocess data from various sources to prepare it for analysis and modelling.
- Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies in the data.
- Design and implement data pipelines for efficient data processing and model training.
- Develop and maintain documentation for data processes, datasets, and models.
- Collaborate with data scientists to design and evaluate machine learning models.
- Monitor and assess the performance of machine learning models and make recommendations for improvements.
- Stay up to date with industry trends and best practices in AI/ML and data analysis.
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Title: Infrastructure & SRE Lead
Location: Bangalore (On-site)

Role Overview
We're hiring an Infrastructure & AIOps Lead to champion the reliability, scalability, and cost-efficiency of our AWS platform, observability stack, and data warehouse. In this role, you'll work hand-in-hand with backend, AI, and analytics teams (including mentoring our DevOps and Data-Ops engineers) to build AI-infused automation assistants, define and maintain runbooks, enforce SLOs, and continuously optimize both application infrastructure and our Redshift/Metabase data platform. You'll leverage AI-assisted coding tools to accelerate routine ops workflows, own Terraform-driven deployments, and partner with stakeholders across product and engineering to keep our systems robust at scale.

Key Responsibilities
Cloud Infrastructure & Automation
- Take an automation-first approach to building an AI DevOps agent that accelerates MTTD and MTTR.
- Design and maintain Terraform-based IaC for AWS resources (ECS, VPCs, RDS, SageMaker) and manage MongoDB Atlas clusters.
- Optimize cost and performance through right-sizing, reserved instances, autoscaling, and continuous infrastructure reviews (a right-sizing script sketch follows this listing).

On-call Reliability & Incident Management
- Serve as the primary PagerDuty escalation lead; refine alert rules and escalation policies.
- Develop and maintain runbooks and playbooks for common incidents (database failovers, service crashes, latency spikes).
- Conduct post-mortems, track error budgets, and drive reliability improvements.

Monitoring & Observability
- Define SLIs/SLOs for critical services and build dashboards in New Relic and Coralogix.
- Instrument logging, tracing, and metrics pipelines; ensure high-fidelity alerts without noise.

CI/CD & Deployment
- Design, implement, and maintain GitHub Actions CI/CD pipelines that automate unit testing and enable continuous releases.
- Collaborate on blue/green or canary release strategies to minimize user impact.

Data Platform & Analytics Support
- Oversee our data-ops function (Redshift data warehouse + Metabase).
- Ensure query performance, cost-optimization of the warehouse, and robust dashboard delivery for the analytics team.

Knowledge Sharing & Mentorship
- Mentor team members on best practices in reliability, observability, and automation.
- Lead regular tech talks on infrastructure, security, and cost management.
- Maintain and evolve our central runbook repository and documentation.

Must-Have Qualifications
- 5+ years of hands-on experience owning cloud infrastructure, preferably on AWS (ECS, RDS, S3, IAM, VPC).
- Proven track record in SRE or DevOps: on-call rotations, runbook development, incident response.
- Strong IaC skills (Terraform, CloudFormation, or similar) and automation of CI/CD pipelines (GitHub Actions).
- Deep expertise in monitoring & observability (New Relic, Coralogix) and alerting (PagerDuty).
- Solid understanding of container orchestration (ECS), networking, and security best practices.
- Proficient programmer (Python or Go) capable of writing automation scripts and small tools.
- Familiarity with AI-assisted coding workflows (e.g., GitHub Copilot, Cursor) and comfortable using AI to accelerate routine tasks.
- Excellent communicator who thrives in a flat, high-ownership environment.

Nice-to-Have
- Experience building or integrating AI-powered automation assistants to streamline infra/data-ops workflows.
- Hands-on practice with LLMs or AI frameworks for operational tooling (prompt engineering, embeddings, etc.).
- Prior involvement in ML/AI infrastructure (SageMaker, model-serving frameworks).
- Experience with large-scale database operations (Redshift, MongoDB Atlas) and caching (Redis).
- Familiarity with message queues and task runners (Celery, RabbitMQ, or similar).
- Contributions to open-source DevOps/SRE tooling.
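A sketch of the kind of automation script this role calls for: flag EC2 instances whose weekly CPU utilization suggests down-sizing. The region, the 10% threshold, and the lack of pagination are simplifying assumptions; the boto3 calls themselves are standard.

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
cw = boto3.client("cloudwatch", region_name="ap-south-1")
now = datetime.now(timezone.utc)

for res in ec2.describe_instances()["Reservations"]:
    for inst in res["Instances"]:
        # Pull daily average CPU for the past week for this instance.
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=now - timedelta(days=7), EndTime=now,
            Period=86400, Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points and max(p["Average"] for p in points) < 10:
            print("Candidate for right-sizing:", inst["InstanceId"])
```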
Posted 4 weeks ago
0.0 - 3.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are seeking a highly skilled and motivated MEAN Stack Developer to join our international IT client's team.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using GitLab, Jenkins, and Bitbucket.
- Administer Linux servers, including networking configurations, DNS, and system troubleshooting.
- Maintain artifact repositories and Artifactory systems.
- Utilize a wide range of AWS services: EC2, S3, ECS, RDS (Postgres), Lambda (Python runtime), DynamoDB, Comprehend, Textract, and SageMaker for ML deployments.
- Optimize AWS resource usage for performance and cost-efficiency.
- Develop infrastructure using Terraform and manage Infrastructure as Code (IaC) workflows.
- Deploy and manage Kubernetes clusters, including EKS, and work with microservices architecture, load balancers, and database replication (Postgres, MongoDB).
- Hands-on experience with Redis clusters, Elasticsearch, and Amazon OpenSearch.
- Integrate monitoring tools such as CloudWatch and Grafana, and implement alerting solutions.
- Support DevOps scripting using tools like the AWS CLI, Python, PowerShell, and optionally FileMaker.
- Implement and maintain automated troubleshooting and system health checks, and ensure maximum uptime.
- Collaborate with development teams to interpret test data and meet quality goals.
- Create system architecture diagrams and provide scalable, cost-effective solutions to clients.
- Implement best practices for network security, data encryption, and overall cybersecurity.
- Stay current with industry trends and introduce modern DevOps tools and practices.
- Ability to handle client interviews with strong communication.

Key Skills & Requirements:
- 3-4 years of experience in DevOps roles.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, Bitbucket).
- Proficiency with AWS cloud infrastructure, including serverless technologies.
- Experience with Docker, Kubernetes, and IaC tools like Terraform.
- Expertise in Linux systems, networking, and scripting (Python, Shell, PowerShell).
- Experience working with Postgres, MongoDB, and DynamoDB.
- Knowledge of Redis, Elasticsearch, and monitoring tools (CloudWatch, Grafana).
- Understanding of microservices architecture, performance optimization, and security.

Preferred Qualifications:
- Hands-on experience with GCP and services like BigQuery, Composer, Airflow, and Pub/Sub is a plus.
- Experience designing and deploying applications on Vercel.
- Knowledge of AWS ML and NLP services (Comprehend, Textract, SageMaker).
- Familiarity with streaming data platforms and real-time pipelines.
- AWS certification (e.g., AWS Solutions Architect) or Kubernetes certification is a strong plus.
- Strong leadership and cross-functional collaboration skills.

Job Types: Full-time, Permanent
Pay: ₹540,000.00 - ₹660,000.00 per year
Benefits: leave encashment, paid sick time, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Ahmedabad, Gujarat: reliably commute or plan to relocate before starting work (Required)
Application Question(s): Please share your current CTC, expected CTC, and notice period.
Experience: DevOps: 3 years (Required)
Work Location: In person
Speak with the employer: +91 9727330030
Posted 4 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Machine Learning Engineer
- We're looking for someone to deploy, automate, maintain and monitor machine learning models and algorithms to make sure they work effectively in a production environment.
- Day-to-day, you'll collaborate with colleagues to design and develop state-of-the-art machine learning products which power our group for our customers.
- This is your opportunity to turn your interests into a diverse and rewarding career, as you solve new problems and create smarter solutions in a non-stop innovation environment.
- We're offering this role at associate vice president level.

What you'll do
As a Machine Learning Engineer, you'll lead the planning and design of complex projects. Your daily responsibilities will see you codifying and automating machine learning model production, including pipeline optimisation, tuning and fault finding, as well as transforming data science prototypes and applying appropriate machine learning algorithms and tools. We'll need you to deploy and maintain adopted end-to-end solutions, including building metrics to improve system performance and identifying and resolving differences in data distribution which affect model performance (a drift-metric sketch follows this listing).

In addition, you'll be responsible for:
- Understanding the needs of our business stakeholders, and how machine learning solutions meet those needs to support the achievement of our business strategy
- Working with colleagues to produce machine learning models, including pipeline designs, development, testing and deployment, to carry the intent and knowledge into production
- Creating frameworks to make sure the monitoring of machine learning models within the production environment is robust
- Delivering models that adhere to expected quality and performance while understanding and addressing any shortfalls, for example through retraining
- Leading and working in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes

The skills you'll need
To be successful in this role, you'll lead ML solutions, feature engineering, and solution designs. You'll need at least five years' experience operationalising ML solutions, alongside DevOps skills such as CI/CD, GitLab, microservices, and the Flask framework. You'll also need good communication skills to engage with a wide range of stakeholders.

Furthermore, you'll need:
- Experience with Python libraries such as NumPy, Pandas, Scikit-learn, TensorFlow and PyTorch
- Experience of building, deploying, and maintaining machine learning models, working with large datasets, and leveraging PySpark, Python, AWS, and other data processing and cloud technologies to create end-to-end ML solutions
- Experience of AWS cloud services like SageMaker, EC2, S3 and Lambda
- An understanding of data processing frameworks such as Apache Kafka and Apache Airflow
- Experience implementing MLOps practices for CI/CD pipelines, model monitoring and lifecycle management, and familiarity with containerization technologies like Docker and orchestration tools like Kubernetes
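One worked example of "identifying differences in data distribution which affect model performance" is the population stability index (PSI), a common drift metric. This sketch bins live values by the training distribution's quantiles and sums the divergence; the drift thresholds and synthetic data are illustrative assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between a reference and a live sample."""
    # Bin edges from the reference (training) distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

train_scores = np.random.normal(0.0, 1.0, 10_000)
live_scores = np.random.normal(0.3, 1.1, 10_000)  # deliberately drifted
print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.2 often flags drift
```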
Posted 4 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Position Overview:
We are seeking a talented Python Machine Learning Engineer with 1 to 4 years of experience to join our dynamic team. The ideal candidate will be passionate about leveraging machine learning algorithms to develop and maintain recommender systems that personalize user experiences and drive engagement.

Key Responsibilities:
- Design, develop, and deploy scalable recommender systems using popular machine learning algorithms such as collaborative filtering, matrix factorization, and deep learning techniques (a matrix-factorization sketch follows this listing).
- Collaborate closely with cross-functional teams to understand business requirements and translate them into actionable machine learning solutions.
- Conduct thorough exploratory data analysis to identify relevant features and patterns in large-scale datasets.
- Implement and optimize machine learning models for performance, scalability, and efficiency.
- Continuously monitor and evaluate model performance using relevant metrics and implement necessary improvements.
- Document code, methodologies, and experiment results for reproducibility and knowledge-sharing purposes.
- Work with SQL and NoSQL databases to store and retrieve training data.
- Write efficient ETL pipelines to feed real-time and batch ML models using Apache Airflow.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.
- 2 to 3 years of hands-on experience in developing machine learning models, with a focus on recommender systems.
- Proficiency in Python programming and popular machine learning libraries/frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Solid understanding of fundamental machine learning concepts including supervised and unsupervised learning, feature engineering, and model evaluation.
- Experience working with large-scale datasets.
- Strong analytical and problem-solving skills with keen attention to detail.
- Excellent communication and collaboration abilities, with the capacity to explain complex technical concepts to non-technical stakeholders.
- Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities.

Preferred Qualifications:
- Experience with cloud computing platforms such as AWS (knowledge of Redshift, Athena, RDS, Spectrum).
- Familiarity with recommendation-system evaluation techniques such as precision, recall, and AUC-ROC.
- Knowledge of natural language processing (NLP) techniques and text-based recommendation systems.
- Contributions to open-source machine learning projects or participation in relevant competitions (e.g., Kaggle) is a plus.
- MLOps & deployment (Docker, Airflow).
- Cloud platforms (AWS, GCP, Azure, SageMaker).

Job Type: Full-time
Pay: Up to ₹1,500,000.00 per year
Schedule: Day shift
Experience: AWS: 1 year (Preferred); advanced Python: 1 year (Preferred); SQL and NoSQL: 1 year (Preferred); machine learning: 1 year (Preferred)
Location: Bengaluru, Karnataka (Preferred)
Work Location: In person
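Matrix factorization, one of the techniques this posting names, can be shown in a minimal form: learn user and item embeddings from observed ratings by stochastic gradient descent. The data here is a toy example; a production system would typically use implicit feedback and a dedicated library.

```python
import numpy as np

# (user, item, rating) observations; IDs and ratings are toy data.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0)]
n_users, n_items, k, lr, reg = 3, 2, 8, 0.05, 0.02

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors

for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]          # prediction error for this pair
        pu = P[u].copy()               # keep old user vector for Q's update
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

# Predict an unobserved rating from the learned embeddings.
print("Predicted rating (user 2, item 1):", P[2] @ Q[1])
```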
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
1. Agentic AI Systems: Design and implement intelligent agentic AI systems using LLMs, vector databases, and orchestration frameworks.
2. ML Workflows: Build, deploy, and maintain ML workflows using AWS SageMaker, Lambda, and EC2 with Docker.
3. ETL Pipelines: Develop and manage ETL pipelines using AWS Glue and integrate with structured/unstructured data sources.
4. Full-Stack Development: Implement APIs and full-stack components to support ML agents, including visualization tools using Streamlit.
5. Legacy System Integration: Reverse-engineer existing codebases and APIs to integrate AI features into legacy or proprietary systems.

Required Qualifications:
1. AWS Experience: Hands-on experience with AWS services like Lambda, EC2, Glue, and SageMaker.
2. Python and Full-Stack Development: Strong Python and full-stack development experience.
3. LLMs and Vector Search: Solid grasp of LLMs and vector search engines.
4. Reverse-Engineering: Demonstrated ability to reverse-engineer systems and build integrations.
5. Cloud Infrastructure: Experience with cloud infrastructure, RESTful APIs, CI/CD pipelines, and containerization.

Preferred Qualifications:
1. RAG and Multi-Agent Systems: Background in Retrieval-Augmented Generation (RAG), multi-agent systems, or knowledge graphs.
2. Open-Source LLM Frameworks: Experience with open-source LLM frameworks like Hugging Face.
3. Autonomous Task Planning: Knowledge of autonomous task planning, symbolic reasoning, or reinforcement learning.
4. Secure AI Systems: Exposure to secure, regulated, or enterprise-scale AI systems.

This role requires a strong blend of technical skills, including AWS, Python, and full-stack development, as well as experience with LLMs, vector search, and agentic AI systems.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Staff Data Engineer

About Trimble
Trimble is a leading provider of advanced positioning solutions that maximize productivity and enhance profitability for our customers. We are an exciting, entrepreneurial company, with a history of exceptional growth coupled with a disciplined and strategic focus on being the best. While GPS is at our core, we have grown beyond this technology to embrace other sophisticated positioning technologies and, in doing so, we are changing the way the world works. Those who successfully lead others to meet our objectives are vital to our organization. Leadership at Trimble is much more than simply exercising assigned authority; we expect our leaders to embrace a mission-focused leadership style, demonstrating the strength of character, intellect and the ability to convert ideas to reality. www.trimble.com

Job Summary
We are looking for a highly skilled Staff Engineer for our Data and Cloud Engineering team with expertise in AWS and Azure. The ideal candidate will have a strong technical background in designing, building, developing, and implementing data pipelines and cloud solutions, along with excellent technical guidance and communication skills. This role requires a strong technical background in cloud platforms, data architecture, and engineering best practices.

Key Responsibilities
- Lead the design and implementation of robust, scalable, and secure cloud-based data pipelines and architectures in MS Azure.
- Provide technical direction and mentorship to a team of engineers; ensure best practices in code quality, architecture, and design.
- Design and implement secure, scalable, and high-performance cloud infrastructure.
- Manage cloud resources, optimize costs, and ensure high availability and disaster recovery.
- Automate infrastructure provisioning and deployment processes using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and ARM templates.
- Collaborate with cross-functional teams to understand data needs and deliver comprehensive cloud solutions.
- Oversee cloud infrastructure management, including monitoring, maintenance, and scaling of cloud resources.
- Ensure compliance with industry standards and regulatory requirements.
- Implement data governance policies and practices and ensure high data quality, integrity, and security across all cloud platforms.
- Develop and enforce best practices for cloud and data engineering, including documentation and code standards.
- Oversee the design and development of robust data pipelines and ETL processes.
- Identify and implement process improvements to enhance the efficiency, quality, and scalability of data engineering and cloud operations.
- Troubleshoot and resolve complex technical issues related to data pipelines and cloud infrastructure.
- Stay current with emerging technologies and industry trends to drive innovation.

Tech Stack
- Infrastructure: Glue, Lambda, Step Functions, Batch, ECS, QuickSight, Machine Learning, SageMaker, Dagster
- DevOps: CloudFormation, Terraform, Git, CodeBuild
- Database: Redshift, PostgreSQL, DynamoDB, Athena (Trino), Snowflake, Databricks
- Language: Bash, Python (PySpark, Pydantic, PyArrow), SQL

Qualifications
- Min. 6 years of proven experience as a senior data and cloud engineer or similar role.
- Extensive experience with AWS and Azure cloud platforms and their data services (e.g., AWS Redshift, AWS Glue, AWS S3, Azure Data Lake, Azure Synapse, Snowflake, Databricks).
- Strong understanding of ETL processes, data warehousing, and big data technologies.
- Proficiency in SQL, Python, and other relevant programming languages.
- Experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or ARM templates.
- Knowledge of containerization and orchestration (e.g., Docker, Kubernetes).
- Understanding of cloud cost management and optimization strategies.
- Familiarity with CI/CD pipelines and DevOps practices.
- Excellent leadership, communication, and interpersonal skills.
- Ability to work in a fast-paced, dynamic environment.
- Strong problem-solving and analytical skills.
- Familiarity with data visualization tools (e.g., Power BI, QuickSight) is a plus.
Posted 4 weeks ago
0.0 years
0 Lacs
Thiruvananthapuram, Kerala
Remote
Thiruvananthapuram Office, AEDGE AICC India Pvt Ltd

About the Company
Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure is limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

About the Role
We are seeking a highly motivated Senior Data Engineer to join our Data Platform team for our Edge Computing AI Platform. As a Data Engineer in our Data Platform team, you will be responsible for helping us shape the future of data ingestion, processing, and analysis, while maintaining and improving existing data systems. If you are a highly motivated individual with a passion for cutting-edge AI, cloud, edge, and infrastructure technology and are ready to take on the challenge of defining and delivering a new computing and AI platform, we would love to hear from you.

Location
This role is office-based at our Trivandrum, Kerala office.

What You'll Do (Key Responsibilities)
- Build new tools and services that support other teams' data workflows, ingestion, processing, and distribution (an Airflow sketch follows this listing).
- Design, discuss, propose, and implement additions to our existing data tooling and services.
- Collaborate with a diverse group of people, giving and receiving feedback for growth.
- Execute on big opportunities and contribute to building a company culture rising to the top of the AI and Edge Computing industry.

Required Qualifications
- 6+ years of experience in software development.
- Experience with data modeling, ETL/ELT processes, and streaming data pipelines.
- Familiarity with data warehousing technologies like Databricks/Snowflake/BigQuery/Redshift and data processing platforms like Spark; experience working with data warehousing file formats like Avro and Parquet.
- Strong understanding of Storage (object stores, data virtualization) and Compute (Spark on K8s, Databricks, AWS EMR and the like) architectures used by data stack solutions and platforms.
- Experience with scheduler tooling like Airflow.
- Experience with version control systems like Git and working using a standardized git flow.
- Strong analytical and problem-solving skills, with the ability to work independently and collaboratively in a team environment.
- Professional experience developing data-heavy platforms and/or APIs.
- A strong understanding of distributed systems and how architectural decisions affect performance and maintainability.
- Bachelor's degree in Computer Science, Electrical Engineering, or a related field.

Preferred Qualifications
- Experience analyzing ML algorithms that could be used to solve a given problem and ranking them by their success probability.
- Proficiency with a deep learning framework such as TensorFlow or Keras.
- Understanding of MLOps practices and practical experience with platforms like Kubeflow / SageMaker.

Compensation & Benefits
For India-based candidates: we offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada.

You're a Great Fit if You're
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
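A hedged sketch of a scheduled ingestion task with Airflow, the scheduler tooling the qualifications name. The DAG id, schedule, and extract function are placeholders; the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder body: pull from a source system and write Parquet
    # to object storage for downstream warehouse loading.
    print("pull from source, write Parquet to object storage")

with DAG(
    dag_id="edge_events_ingest",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```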
Posted 4 weeks ago
0 years
0 Lacs
Vapi, Gujarat, India
On-site
Job Title: AI Lead Engineer
Location: Vapi, Gujarat
Experience Required: 5+ Years
Working Days: 6 days a week (Monday–Saturday)
Industry Exposure: manufacturing, retail, finance, healthcare, life sciences, or related field

Job Description:
We are seeking a highly skilled and hands-on AI Lead to join our team in Vapi. The ideal candidate will have a proven track record of developing and deploying machine learning systems in real-world environments, along with the ability to lead AI projects from concept to production. You will work closely with business and technical stakeholders to drive innovation, optimize operations, and implement intelligent automation solutions.

Key Responsibilities:
- Lead the design, development, and deployment of AI/ML models for business-critical applications.
- Build and implement computer vision systems (e.g., defect detection, image recognition) using frameworks like OpenCV and YOLO.
- Develop predictive analytics models (e.g., predictive maintenance, forecasting) using time series and machine learning algorithms such as XGBoost (a predictive-maintenance sketch follows this listing).
- Build and deploy recommendation engines and optimization models to improve operational efficiency.
- Establish and maintain robust MLOps pipelines using tools such as MLflow, Docker, and Jenkins.
- Collaborate with stakeholders across business and IT to define KPIs and deliver AI solutions aligned with organizational objectives.
- Integrate AI models into existing ERP or production systems using REST APIs and microservices.
- Mentor and guide a team of junior ML engineers and data scientists.

Required Skills & Technologies:
- Programming Languages: Python (advanced), SQL, Bash, Java (basic)
- ML Frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
- DevOps & MLOps Tools: Docker, FastAPI, MLflow, Jenkins, Git
- Data Engineering & Visualization: Pandas, Spark, Airflow, Tableau
- Cloud Platforms: AWS (S3, EC2, SageMaker - basic)
- Specializations: Computer Vision (YOLOv8, OpenCV), NLP (spaCy, Transformers), Time Series Analysis
- Deployment: ONNX, REST APIs, ERP system integration

Qualifications:
- B.Tech / M.Tech / M.Sc in Computer Science, Data Science, or a related field.
- 6+ years of experience in AI/ML with a strong focus on production-ready deployments.
- Demonstrated experience leading AI/ML teams or projects.
- Strong problem-solving skills and the ability to communicate effectively with cross-functional teams.
- Domain experience in manufacturing, retail, or healthcare preferred.

What We Offer:
- A leadership role in an innovation-driven team
- Exposure to end-to-end AI product development in a dynamic industry environment
- Opportunities to lead, innovate, and mentor
- Competitive salary and benefits package
- A 6-day work culture supporting growth and accountability

This is a startup environment backed by a reputable company. We are looking for someone who can work Monday to Saturday, lead a team, generate new solutions and ideas, and lead and manage projects effectively.

Please fill in the form below before applying: https://forms.gle/8b3gdxzvc2JwnYfZ6
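A sketch of the predictive-maintenance modeling the role describes, using XGBoost on sensor-style features. The data is synthetic and the feature names are illustrative; real projects would use labeled failure histories.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic sensor features: vibration, temperature, current, rpm.
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))
# Failures correlate with the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```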
Posted 4 weeks ago
6 - 10 years
13 - 18 Lacs
Hyderabad
Remote
Hi Everyone,
Greetings from Intuition IT, a global recruitment firm. We have an exciting job opportunity for DevOps with AI Platform and Data Science with our leading client.
Location: PAN India (Remote)
Job Type: Long-term Contract
Job Description:
Support the platform, which offers infrastructure to Data Science / Data Analytics / MLOps teams
Resolve issues in provisioning new use cases on the AI Platform
Resolve incidents and service requests related to the AI Platform (a short triage sketch follows this posting)
Collaborate with IAM teams on account provisioning
Coordinate with other platform teams (AWS, Snowflake, Databricks, etc.)
Monitor CI/CD pipelines in the AI Platform
Proficient in AWS tools (IAM, S3, EKS, SageMaker, ACM, ECR, RDS, Secrets Manager, Lambda, Step Functions)
DevOps tools: Jenkins, Bitbucket, JFrog, SonarQube, Checkmarx, Kubernetes, Docker, etc.
Please share your CV to: maheshwari.p@intuition-IT.com
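As a rough illustration of the day-to-day triage this role involves, a short boto3 sketch that flags SageMaker endpoints that are not in service. It assumes AWS credentials and a default region are already configured; the script is a generic example, not part of the client's tooling:

```python
import boto3

# Assumes AWS credentials and a default region are configured in the environment.
sm = boto3.client("sagemaker")

# A typical first triage step for an AI Platform incident: list SageMaker
# endpoints and flag any that are not healthy.
# list_endpoints() returns one page; use a paginator for many endpoints.
for ep in sm.list_endpoints()["Endpoints"]:
    name, status = ep["EndpointName"], ep["EndpointStatus"]
    marker = "OK" if status == "InService" else "ATTENTION"
    print(f"{marker}: {name} is {status}")
```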
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You'll Be Doing...
Join Verizon as we continue to grow our industry-leading network to improve the ways people, businesses, and things connect. We are looking for an experienced, talented and motivated AI & ML Engineer to lead AI industrialization for Verizon. You will also serve as a subject matter expert on the latest industry knowledge to improve Home products, solutions and/or processes related to Machine Learning, Deep Learning, Responsible AI, Gen AI, Natural Language Processing, Computer Vision and other AI practices.
Deploying machine learning models in on-prem, cloud and Kubernetes environments.
Driving data-derived insights across the business domain by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives.
Creating and implementing data and ML pipelines for model inference, both in real time and in batches (a minimal serving sketch follows this section).
Architecting, designing, and implementing large-scale AI/ML systems in a production environment.
Monitoring the performance of data pipelines and making improvements as necessary.
What We're Looking For...
You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and can interact with various partners and multi-functional teams to implement data science-driven business solutions.
You'll Need To Have:
Bachelor's degree with four or more years of relevant work experience.
Expertise in advanced analytics / predictive modelling in a consulting role.
Experience with all phases of end-to-end analytics projects.
Hands-on programming expertise in Python (with libraries like NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch) and R (for specific data analysis tasks).
Knowledge of machine learning algorithms: Linear Regression, Logistic Regression, Decision Trees, Random Forests, Support Vector Machines (SVMs), Neural Networks (Deep Learning), Bayesian Networks.
Data engineering: data cleaning and preprocessing, feature engineering, data transformation, data visualization.
Cloud platforms: AWS SageMaker, Azure Machine Learning, Cloud AI Platform.
Even better if you have one or more of the following:
Advanced degree in Computer Science, Data Science, Machine Learning, or a related field.
Knowledge of the Home domain, with key areas like Smart Home, digital security and wellbeing.
Experience with stream-processing systems: Spark Streaming, Storm, etc.
#TPDNONCDIOREF
Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
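For context on the real-time inference pipelines described above, one common pattern is a lightweight HTTP service wrapping a trained model. The sketch below uses FastAPI and a scikit-learn artifact; the framework choice, the `churn_model.joblib` file, and the feature names are illustrative assumptions, not details from the posting:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("churn_model.joblib")  # hypothetical trained scikit-learn model

class Features(BaseModel):
    tenure_months: float
    monthly_charges: float
    support_calls: int

@app.post("/predict")
def predict(f: Features):
    # Feature order must match the order used at training time.
    row = [[f.tenure_months, f.monthly_charges, f.support_calls]]
    proba = float(model.predict_proba(row)[0][1])
    return {"churn_probability": round(proba, 4)}
```

Packaged in a Docker image and run with uvicorn, a service like this deploys essentially unchanged to on-prem hosts, cloud VMs, or Kubernetes, which is what makes it a common shape for such pipelines.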
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities
Work closely with clients to understand their business requirements and design data solutions that meet their needs.
Develop and implement end-to-end data solutions that include data ingestion, data storage, data processing, and data visualization components.
Design and implement data architectures that are scalable, secure, and compliant with industry standards.
Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
Participate in presales activities, including solution design, proposal creation, and client presentations.
Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
Stay up to date with industry trends and emerging technologies related to data architecture and engineering.
Develop and maintain relationships with clients to ensure their ongoing satisfaction and identify opportunities for additional business.
Understand the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
Exposure to Gen AI and emerging technologies.
Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
Good knowledge of data governance, data warehousing and data modelling.
Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
Strong technical background in data architecture, data engineering, and data management.
Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
Experience with presales activities such as solution design, proposal creation, and client presentations.
Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
Experience with Kubernetes and the Gen AI tools and tech stack.
Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.
Tools and Tech Stack
Data Architecture and Engineering (Hadoop ecosystem): Preferred: Cloudera Data Platform (CDP) or Data Fabric. Tools: HDFS, Hive, Spark, HBase, Oozie.
Data Warehousing: Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, Azure Databricks. On-premises: Teradata, Vertica.
Data Integration and ETL Tools: Apache NiFi, Talend, Informatica, Azure Data Factory, Glue.
Cloud Platforms: Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
Cloud-native Components: Data Lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage. Data Streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
HPE Platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
AI and Gen AI Technologies: AI Lifecycle Management / MLOps: MLflow, Kubeflow, Azure ML, SageMaker, Ray (a short MLflow sketch follows this posting). Inference tools: TensorFlow Serving, KServe, Seldon. Generative AI Frameworks: Hugging Face Transformers, LangChain.
Tools: OpenAI API (e.g., GPT-4).
Kubernetes Orchestration and Deployment: Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes. Tools: Helm.
CI/CD for Data Pipelines and Applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
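To ground the MLOps tooling named in the stack above, a minimal MLflow tracking sketch; the synthetic data, model choice, experiment name, and metric are illustrative assumptions, not part of the role description:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real engagement would use the client's dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-data-solution")  # illustrative experiment name
with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    mlflow.log_param("n_estimators", 100)   # record the hyperparameter
    mlflow.log_metric("accuracy", acc)      # record the evaluation metric
    mlflow.sklearn.log_model(clf, "model")  # store the model artifact
```

A model logged this way can then be promoted through the MLflow registry and served via KServe or Seldon, matching the inference tools listed in the stack.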
Posted 4 weeks ago
0 years
0 Lacs
India
Remote
Position Title: AWS AI Engineer
Job Timings: 8am–5pm CST (Central Time, US)
Location: Remote, India
About Cloudelligent
Cloudelligent is a cloud-native consultancy and AWS Advanced Consulting Partner! We specialize in providing bespoke cloud solutions to the SMB and enterprise segments as well as the public sector (non-profit organizations). Being a next-gen cloud service provider, Cloudelligent helps small businesses and medium/large enterprises break free from the hardware lifecycle and capital expenditures, allowing them to make the most of their cloud investment. We have an international footprint with a diverse team of domain experts, and we are customer-obsessed.
Job Objective
As an AI Engineer at Cloudelligent, you will drive the development of innovative artificial intelligence solutions. This role emphasizes the implementation, optimization, and deployment of AI models while collaborating with cross-functional teams to enhance our products and services. The ideal candidate will possess a strong foundation in both machine learning and deep learning.
Responsibilities:
Implement and fine-tune pre-trained AI models tailored to meet specific industry needs.
Utilize cloud-based AI services, particularly AWS Bedrock, Amazon SageMaker, Amazon Rekognition, and OpenAI, to streamline development and deployment processes.
Leverage AWS data services, including Amazon Athena, Amazon Redshift, OpenSearch, Kendra and AWS Glue, for efficient data processing and analysis.
Optimize AI models for performance, accuracy, and efficiency, ensuring they meet real-world requirements.
Apply techniques to reduce AI hallucination and enhance model reliability.
Utilize Retrieval-Augmented Generation (RAG) techniques to improve AI response quality.
Work with vector databases, specifically PostgreSQL with pgvector, for efficient similarity search and information retrieval in AI applications (a minimal example follows this posting).
Collaborate with cross-functional teams to seamlessly integrate AI capabilities into our products and services.
Engage in prompt engineering to enhance model interaction and output quality.
Stay updated on the latest trends and advancements in AI technologies, contributing to the adoption of new tools and techniques.
Requirements:
Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
4+ years of hands-on experience in AI/ML development.
Strong programming skills, particularly in Python, and familiarity with AI/ML frameworks such as TensorFlow, PyTorch, or Keras.
Experience with cloud-based AI services, especially AWS Bedrock, Amazon SageMaker, and Azure OpenAI.
Familiarity with AWS data services, including Amazon Athena, Amazon Redshift, and AWS Glue.
Knowledge of agentic behavior in AI, with experience deploying models using CrewAI, LangGraph, and LangChain.
Familiarity with vector databases and similarity search techniques.
Solid understanding of various AI/ML algorithms and architectures, including but not limited to CNNs, RNNs, and transformers.
Experience in data preprocessing, feature engineering, and model evaluation.
Excellent problem-solving and communication skills.
Good to Have:
Experience with machine learning frameworks and libraries like TensorFlow, PyTorch, or Scikit-learn.
Knowledge of big data technologies such as Apache Spark or Hadoop.
Familiarity with data visualization tools and techniques.
Highly Desirable:
Relevant certifications (e.g., AWS Certified Machine Learning – Specialty).
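A minimal sketch of the pgvector similarity search referenced in the responsibilities above. The table name, schema, DSN, and tiny 3-dimensional query vector are illustrative; in a real RAG pipeline the query embedding would come from an embedding model (for instance via Bedrock):

```python
import psycopg2

# Assumed schema: CREATE TABLE documents (id serial PRIMARY KEY,
#                                         content text,
#                                         embedding vector(3));
conn = psycopg2.connect("dbname=rag_demo user=app")  # illustrative DSN
cur = conn.cursor()

# A hand-written query vector keeps the sketch self-contained.
query_vec = "[0.1, 0.2, 0.3]"

# `<->` is pgvector's L2-distance operator; the closest rows come first.
cur.execute(
    "SELECT id, content FROM documents "
    "ORDER BY embedding <-> %s::vector LIMIT 5",
    (query_vec,),
)
for doc_id, content in cur.fetchall():
    print(doc_id, content[:80])

cur.close()
conn.close()
```

The retrieved rows would then be stuffed into the prompt as context, which is the step that gives RAG its grounding effect.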
Posted 4 weeks ago
3 - 7 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities
Education: Bachelors/Masters in Computer Science or an equivalent engineering discipline.
Previous work experience: 3-7 years (Bachelor's degree holders).
In-depth experience with various AWS data services, including Amazon S3, Amazon Redshift, AWS Glue, AWS Lambda, Amazon EMR, SQS, SNS, Step Functions, and EventBridge.
Ability to design, implement, and maintain scalable data pipelines using AWS services.
Strong proficiency in big data technologies such as Apache Spark and Apache Hadoop for processing and analyzing large datasets.
Hands-on experience with database management systems such as Amazon RDS, DynamoDB, and others.
Good knowledge of AWS OpenSearch.
Experience with data ingestion services like AWS AppFlow, DMS, AWS Glue, etc.
Hands-on experience developing REST APIs using AWS API Gateway.
Experience in real-time and batch data processing in the AWS environment, utilizing services like Kinesis Firehose, AWS Glue, AWS Lambda, etc.
Proficiency in programming languages such as Python and PySpark for building data applications and ETL processes (a short PySpark sketch follows this posting).
Strong scripting skills for automation and orchestration of data workflows.
Solid understanding of data warehousing concepts and best practices.
Experience in designing and managing data warehouses on AWS Redshift or similar platforms.
Proven experience in designing and implementing Extract, Transform, Load (ETL) processes.
Knowledge of AWS security best practices and the ability to implement secure data solutions.
Knowledge of monitoring logs and creating alerts and dashboards in AWS CloudWatch.
Understanding of version control systems, such as Git.
Nice-to-Have Skills
Experience with Agile and DevOps concepts.
Understanding of networking principles, including VPC design, subnets, and security groups.
Experience with containerization tools such as Docker and orchestration tools like Kubernetes.
Ability to deploy and manage data applications using containerized solutions.
Familiarity with integrating machine learning models into data pipelines.
Knowledge of AWS SageMaker or other machine learning platforms.
Experience with AWS Bedrock for Gen AI integration.
Knowledge of monitoring tools for tracking the performance and health of data systems.
Ability to optimize and fine-tune data pipelines for efficiency.
Experience with AWS services such as CodePipeline, CodeCommit, CodeDeploy, CodeBuild, and CloudFormation.
Mandatory Skill Sets: AWS Data Engineer
Preferred Skill Sets: AWS Data Engineer
Years of Experience Required: 4-10
Education Qualification: BTech/MBA/MCA
Required Skills: AWS DevOps
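As a small illustration of the PySpark ETL work this role centres on (referenced in the responsibilities above), a batch job that deduplicates raw order events and writes a daily aggregate. The S3 paths and column names are illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Illustrative paths; in AWS Glue the source would usually be a catalog table.
raw = spark.read.json("s3://example-raw-bucket/orders/2025/05/")

cleaned = (
    raw.dropDuplicates(["order_id"])                      # remove replayed events
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)                       # drop non-positive rows
)

daily = (
    cleaned.groupBy(F.to_date("order_ts").alias("order_date"))
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").parquet("s3://example-curated-bucket/orders_daily/")
```

The same transform logic drops unchanged into an AWS Glue job or an EMR step, which is why PySpark proficiency carries across the services listed above.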