
112 AWS SageMaker Jobs - Page 4

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 5.0 years

7 - 11 Lacs

Hyderabad

Work from Office


About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Role Title: Business Analytics Associate Analyst

Position Summary: The Business Analysis Associate Analyst will work closely with business stakeholders, IT teams, and subject matter experts in the Global Healthcare Innovation Hub team to gather, analyze, and document business requirements for GenAI capabilities, translating them into functional and non-functional specifications for use by IT teams in the development and integration of AI-related solutions. A minimum of two years of experience as a business analyst or in a similar role, with knowledge of testing, is required. Experience with GenAI and deep learning technologies is desirable.

Responsibilities:
- Liaise with business stakeholders, IT teams, and subject matter experts to gather, analyze, and document GenAI-related business requirements, translating them into functional and non-functional specifications for use by IT teams in the development of AI technology solutions.
- Collaborate with project teams to design, develop, and implement IT solutions focused on GenAI capabilities, aligned with the organization's objectives and industry best practices.
- Act as a bridge between business users and IT developers, ensuring GenAI development efforts remain consistent with the business requirements and strategic goals of the organization.
- On the QA side, support user acceptance testing for AI-related solutions, helping to identify defects, track them to resolution, and ensure the final deliverables meet the agreed-upon requirements.
- Participate in ongoing monitoring and measurement of the effectiveness of GenAI technology solutions, providing insights and suggestions for continuous improvement.
- Develop and maintain documentation, such as flowcharts, use cases, data diagrams, and user manuals, to help business stakeholders understand AI features and their usage.
- Assist in creating project plans, timelines, and resource allocations for GenAI initiatives, ensuring projects stay on schedule and within budget.
- Employ strong analytical and conceptual thinking to help stakeholders formulate GenAI-related requirements through interviews, workshops, and other methods.
- Perform QA testing of AI models developed as part of business requirements.

Experience Required: 2+ years of experience in business and system analysis and testing, with a focus on GenAI, preferably in healthcare or a related industry.

Experience Desired:
- Proficiency in GenAI-related business analysis methodologies, including requirements gathering, process modeling, and data analysis techniques.
- Strong knowledge of software development lifecycle (SDLC) methodologies for AI-related technologies, such as Agile, Scrum, or Waterfall.
- Experience with IT project management tools and knowledge of AI frameworks. Experience with Jira and Confluence is a must.
- Proficiency in common office productivity tools such as the Microsoft Office Suite and Visio; experience with cloud services like AWS SageMaker and Azure ML is an added advantage.
- Technical knowledge of programming languages such as C/C++, Java, and Python, and an understanding of database systems, IT infrastructure, and GPU optimization for AI workloads are added advantages.
- Excellent communication, interpersonal, and stakeholder management skills with a focus on GenAI capabilities.
- Relevant certifications, such as Certified Business Analysis Professional (CBAP) or AI-related credentials, are a plus.
- Ability to design GenAI features and define user stories based on business requirements.
- Good understanding of the software delivery lifecycle for AI-related technologies, and experience creating detailed reports and presentations.

Education and Training Required: Degree in Computer Science, Artificial Intelligence, or a related field.

Location & Hours of Work: Full-time position, working 40 hours per week, with overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.

Join us in driving growth and improving lives.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

9 - 14 Lacs

Hyderabad

Work from Office


About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Role Title: Software Engineering Senior Analyst

Position Summary: The AI CoE team is seeking GenAI full-stack software engineers to work on the AI ecosystem. The GenAI Fullstack Engineer will design, implement, and deploy scalable and efficient AI solutions with a focus on privacy, security, and fairness. Key components include Angular- and React-based user interfaces, a robust set of Application Programming Interfaces (APIs) enabling a wide range of platform integration capabilities, a flexible Machine Learning (ML) document processing pipeline, and integration with various Robotic Process Automation (RPA) platforms. This role works directly with business partners and other IT team members to understand desired system requirements, deliver effective solutions within the Agile methodology, and participate in all phases of the development and system support life cycle. Primary responsibilities are to build AI platforms leveraging LLMs and other GenAI capabilities and to provide design solutions for portal integration, enhancement, and automation requests. The ideal candidate is a technologist who brings a fresh perspective and passion to solving complex functional and technical challenges in a fast-paced, team-oriented environment.

Responsibilities:
- Full-stack development, including triage, design, coding, and implementation.
- Build enterprise-grade AI solutions with a focus on privacy, security, and fairness.
- Perform code reviews with scrum teams to approve production deployments.
- Conduct research to identify new solutions and methods for diverse and evolving business needs.
- Establish, improve, and maintain proactive monitoring and management of supported assets, assuring performance, availability, security, and capacity.
- Maintain strong, collaborative relationships with delivery partners and business stakeholders.

Experience Required: This position requires a highly technical, hands-on, motivated, and collaborative individual with exceptional communication skills and proven experience working with diverse teams of technical architects, business users, and IT areas across all phases of the software development life cycle.
- 4+ years of total experience.
- 3+ years of experience in full-stack development (frontend, backend, and cloud).
- 1+ years of experience in DevOps.
- 1+ years of experience developing GenAI solutions and integrating them with other systems.

Experience Desired:

### Front-End Development Skills
1. HTML/CSS/JavaScript: Proficiency in the core technologies for building web interfaces.
2. Front-End Frameworks: Knowledge of frameworks like React, Angular, or Vue.js for creating interactive and responsive user interfaces.
3. UI/UX Design: Basic understanding of user experience and user interface design principles.

### Back-End Development Skills
1. Server-Side Languages: Proficiency in server-side languages such as Python and Node.js.
2. APIs: Experience building and consuming RESTful APIs with Flask, FastAPI, or GraphQL.
3. Database Management: Knowledge of SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, etc.

### DevOps and Cloud Skills
1. Containerization and Orchestration: Experience with Docker and Kubernetes/OpenShift for containerizing and orchestrating applications.
2. CI/CD Pipelines: Knowledge of continuous integration and continuous deployment tools and practices.
3. Security Best Practices: Understanding of security principles and best practices for protecting data and systems, including IAM, encryption, and network security.
4. Cloud Services: Familiarity with cloud platforms like AWS, Google Cloud, or Azure for deploying and managing applications and AI models.

### AI and Machine Learning Skills (Good to Have)
1. Foundations in Machine Learning and Deep Learning: Understanding of algorithms, neural networks, supervised and unsupervised learning, and deep learning frameworks like TensorFlow, PyTorch, and Keras.
2. Generative Models: Knowledge of generative models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and Transformers.
3. Natural Language Processing (NLP): Knowledge of NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) for text generation tasks.
4. Model Deployment: Experience deploying models using services like TensorFlow Serving, TorchServe, or cloud-based solutions (e.g., AWS SageMaker, Google AI Platform).
5. Prompt Engineering, Fine-Tuning, and RAG: Basic understanding of implementing each.

### Additional Skills
1. Version Control: Proficiency with Git and version control workflows.
2. Software Development Practices: Understanding of agile methodologies, testing, and code review practices.

Education and Training Required: Degree in Computer Science, Artificial Intelligence, or a related field.

Location & Hours of Work: Full-time position, working 40 hours per week, with overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.

Join us in driving growth and improving lives.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office


Your future duties and responsibilities:

Skills: pgvector, Vertex AI, FastAPI, Flask, Kubernetes

- Develops and optimizes AI applications for production, ensuring seamless integration with enterprise systems and front-end applications.
- Builds scalable API layers and microservices using FastAPI, Flask, Docker, and Kubernetes to serve AI models in real-world environments.
- Implements and maintains AI pipelines with MLOps best practices, leveraging tools like Azure ML, Databricks, AWS SageMaker, and Vertex AI.
- Ensures high availability, reliability, and performance of AI systems through rigorous testing, monitoring, and optimization.
- Works with agentic frameworks such as LangChain, LangGraph, and AutoGen to build adaptive AI agents and workflows.
- Experience with GCP, AWS, or Azure, utilizing services such as Vertex AI, Bedrock, or Azure OpenAI model endpoints.
- Hands-on experience with vector databases such as pgvector, Milvus, Azure Search, and AWS OpenSearch, and with embedding models such as Ada and Titan.
- Collaborates with architects and scientists to transition AI models from research to fully functional, high-performance production systems.

Additional skills: Azure Search, Flask, Kubernetes
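The vector-database retrieval this role describes reduces to nearest-neighbour search over embeddings. A minimal, illustrative sketch in pure Python — the toy 3-dimensional vectors stand in for real embedding models such as Ada or Titan, and a production system would delegate this search to pgvector, Milvus, or OpenSearch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, store, k=2):
    """Return the k document ids most similar to the query embedding.

    `store` maps doc_id -> embedding, standing in for a vector database.
    """
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy corpus of pre-computed (hypothetical) embeddings.
store = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.0, 0.0], store))  # -> ['doc_a', 'doc_b']
```

The same ranking is what a pgvector `ORDER BY embedding <=> query LIMIT k` query computes server-side, with an index instead of a full scan.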

Posted 3 weeks ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Work from Office


Backend Developer Responsibilities & Skills

Position Title: Backend Developer
Position Type: Full-time, permanent
Location: Bengaluru, India

Company Description: Privaini is the pioneer of privacy risk management for companies and their entire business networks. Privaini offers a unique "outside-in" approach, empowering companies to gain a comprehensive understanding of both internal and external privacy risks. It provides actionable insights using a data-driven, systematic, and automated approach to proactively address reputation and legal risks related to data privacy. Privaini generates an AI-powered privacy profile and privacy score for a company from externally observable privacy, corporate, regulatory, historical-event, and security data. Without time-consuming questionnaires or software installation, Privaini creates standardized privacy views of companies from externally observable information. It then builds a privacy risk posture for every business partner in the company's business network and continuously monitors each one. Our platform provides actionable insights that privacy and risk teams can readily implement. Be part of an exciting team of researchers, developers, and data scientists focused on the mission of building transparency into data privacy risks for companies and their business networks.

Key Responsibilities:
- Strong Python, Flask, REST API, and NoSQL skills; familiarity with Docker is a plus.
- AWS Developer Associate certification is required; an AWS Professional certification is preferred.
- Architect, build, and maintain secure, scalable backend services on AWS.
- Utilize core AWS services like Lambda, DynamoDB, API Gateway, and serverless technologies.
- Design and deliver RESTful APIs using the Python Flask framework.
- Leverage NoSQL databases and design efficient data models for large user bases.
- Integrate with web service APIs and external systems.
- Apply AWS SageMaker for machine learning and analytics (optional but preferred).
- Collaborate effectively with diverse teams (business analysts, data scientists, etc.).
- Troubleshoot and resolve technical issues within distributed systems.
- Employ Agile methodologies (JIRA, Git) and adhere to best practices.
- Continuously learn and adapt to new technologies and industry standards.

Qualifications: A bachelor's degree in computer science, information technology, or a relevant discipline is required; a master's degree is preferred. At least 6 years of development experience, with 5+ years of experience in AWS. Must have demonstrated skills in planning, designing, developing, architecting, and implementing applications.

Additional Information: At Privaini Software India Private Limited, we value diversity and always treat all employees and job applicants based on merit, qualifications, competence, and talent. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
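The RESTful JSON endpoints this role calls for rest on WSGI, the interface Flask itself implements under the hood. A dependency-free sketch of such an endpoint using only the standard library — the `/api/v1/health` route is a hypothetical example, not part of any real API:

```python
import json
from io import BytesIO

def app(environ, start_response):
    """Minimal WSGI application sketching a JSON health endpoint.

    Flask provides this same contract with routing, request parsing,
    and error handling layered on top.
    """
    if environ["PATH_INFO"] == "/api/v1/health" and environ["REQUEST_METHOD"] == "GET":
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"status": "ok"}).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

def call(wsgi_app, method, path):
    """Invoke a WSGI app in-process, without running a server."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    environ = {"REQUEST_METHOD": method, "PATH_INFO": path, "wsgi.input": BytesIO()}
    body = b"".join(wsgi_app(environ, start_response))
    return captured["status"], json.loads(body)

print(call(app, "GET", "/api/v1/health"))  # -> ('200 OK', {'status': 'ok'})
```

In-process invocation like `call()` above is also how Flask's own test client exercises routes without binding a port.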

Posted 3 weeks ago

Apply

11.0 - 21.0 years

18 - 32 Lacs

Bengaluru

Work from Office


Mandatory Skills:
- Strong proficiency in Generative AI, Large Language Models (LLMs), deep learning, agentic frameworks, and RAG setup.
- Experience designing and implementing machine learning models using scikit-learn and TensorFlow.
- Hands-on expertise with AI/ML frameworks such as Hugging Face, LangChain, LangGraph, and PyTorch.
- Cloud AI services experience, particularly with AWS SageMaker and AWS Bedrock.
- MLOps & DevOps: Knowledge of data pipeline setup, Apache Airflow, CI/CD, and containerization (Docker, Kubernetes).
- API Development: Ability to develop and maintain APIs following RESTful principles.
- Technical proficiency in Elastic, Python, YAML, and system integrations.

Nice-to-Have Skills:
- Experience with observability, Ansible, Terraform, Git, microservices, AIOps, and scripting in Python.
- Familiarity with AI cloud services such as Azure OpenAI and Google Vertex AI.

Posted 4 weeks ago

Apply

12.0 - 17.0 years

15 - 20 Lacs

Pune, Bengaluru

Hybrid


Tech Architect - AWS AI (Anthropic)

Experience: 12+ years of total IT experience, with a minimum of 8 years in AI/ML architecture and solution development. Strong hands-on expertise in designing and building GenAI solutions using AWS services such as Amazon Bedrock, SageMaker, and Anthropic Claude models.

Role Overview: The Tech Architect - AWS AI (Anthropic) will be responsible for translating AI solution requirements into scalable and secure AWS-native architectures. This role combines architectural leadership with hands-on technical depth in GenAI model integration, data pipelines, and deployment using Amazon Bedrock and Claude models. The ideal candidate will bridge the gap between strategic AI vision and engineering execution while ensuring alignment with enterprise cloud and security standards.

Key Responsibilities:
- Design robust, scalable architectures for GenAI use cases using Amazon Bedrock and Anthropic Claude.
- Lead architectural decisions involving model orchestration, prompt optimization, RAG pipelines, and API integration.
- Define best practices for implementing AI workflows using SageMaker, Lambda, API Gateway, and Step Functions.
- Review and validate implementation approaches with tech leads and developers; ensure alignment with architecture blueprints.
- Contribute to client proposals, solution pitch decks, and technical sections of RFP/RFI responses.
- Ensure AI solutions meet enterprise requirements for security, privacy, compliance, and performance.
- Collaborate with cloud infrastructure, data engineering, and DevOps teams to ensure seamless deployment and monitoring.
- Stay current on AWS Bedrock advancements, Claude model improvements, and best practices for GenAI governance.

Required Skills and Competencies:
- Deep hands-on experience with Amazon Bedrock, Claude (Anthropic), Amazon Titan, and embedding-based workflows.
- Proficient in Python and cloud-native API development; experienced with JSON, RESTful integrations, and serverless orchestration.
- Strong understanding of SageMaker (model training, tuning, pipelines), real-time inference, and deployment strategies.
- Knowledge of RAG architectures, vector search (e.g., OpenSearch, Pinecone), and prompt engineering techniques.
- Expertise in IAM, encryption, access control, and responsible AI principles for secure AI deployments.
- Ability to create and communicate high-quality architectural diagrams and technical documentation.

Desirable Qualifications:
- AWS Certified Machine Learning - Specialty and/or AWS Certified Solutions Architect - Professional.
- Familiarity with LangChain, Haystack, or Semantic Kernel in an AWS context.
- Experience with enterprise-grade GenAI use cases such as intelligent search, document summarization, conversational AI, and code copilots.
- Exposure to integrating third-party model APIs and services available via AWS Marketplace.

Soft Skills:
- Strong articulation and technical storytelling capabilities for client and executive conversations.
- Proven leadership in cross-functional project environments with globally distributed teams.
- Analytical mindset with a focus on delivering reliable, maintainable, and performant AI solutions.
- Self-driven, curious, and continuously exploring innovations in GenAI and AWS services.

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance, with integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
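One of the RAG building blocks this role names — assembling a grounded prompt from retrieved passages — can be sketched in a few lines. The template wording below is illustrative only, not a Bedrock or Claude requirement:

```python
def build_rag_prompt(question, passages, max_passages=3):
    """Assemble a grounded prompt from retrieved passages.

    Numbering the passages lets the model cite its sources, a common
    RAG pattern; retrieval itself (vector search) happens upstream.
    """
    context = "\n\n".join(
        f"[{i + 1}] {p}" for i, p in enumerate(passages[:max_passages])
    )
    return (
        "Answer the question using only the context below. "
        "Cite passage numbers.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical retrieved passages for a documentation question.
prompt = build_rag_prompt(
    "What port does the service listen on?",
    ["The API listens on port 8443.", "Deployments run on EKS."],
)
```

The resulting string is what would be sent as the model input, for example via the Bedrock runtime's model invocation API.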

Posted 4 weeks ago

Apply

1.0 - 5.0 years

27 - 32 Lacs

Karnataka

Work from Office


As a global leader in cybersecurity, CrowdStrike protects the people, processes, and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, with 3.44 PB of RAM deployed across our fleet of C* servers and traffic growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe, and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation, and a fanatical commitment to our customers, our community, and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role: The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, the ML model development lifecycle, ML engineering, and insights activation. This team sits within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. The data sets we process include telemetry data, associated metadata, IT asset information, contextual information about threat exposure, and many more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse.

We are seeking a strategic and technically savvy leader to head our Data and ML Platform team. As the head, you will be responsible for defining and building our ML Experimentation Platform from the ground up while scaling our data and ML infrastructure to support various roles, including data platform engineers, data scientists, and threat analysts. Your key responsibilities will involve overseeing the design, implementation, and maintenance of scalable ML pipelines for data preparation, cataloging, feature engineering, model training, model serving, and in-field model performance monitoring. These efforts will directly influence critical business decisions. In this role, you'll foster a production-focused culture that effectively bridges the gap between model development and operational success. You'll also be at the forefront of our ongoing Generative AI investments. The ideal candidate combines strategic vision with hands-on technical expertise in machine learning and data infrastructure, driving innovation and excellence across our data and ML initiatives. We are building this team with ownership at Bengaluru, India; this leader will help us bootstrap the entire site, starting with this team.

What You'll Do:
- Strategic Leadership: Define the vision, strategy, and roadmap for the organization's data and ML platform to align with critical business goals. Help design, build, and drive adoption of a modern Data + ML platform. Stay current on emerging technologies and trends in data platforms, MLOps, and AI/ML.
- Team Management: Build a team of data and ML platform engineers from a small footprint across multiple geographies. Foster a culture of innovation and strong customer commitment for both internal and external stakeholders.
- Platform Development: Oversee the design and implementation of a platform containing data pipelines, feature stores, and model deployment frameworks. Develop and enhance MLOps practices to streamline model lifecycle management from development to production.
- Data Governance: Institute best practices for data security, compliance, and quality to ensure safe and secure use of AI/ML models.
- Stakeholder Engagement: Partner with product, engineering, and data science teams to understand requirements and translate them into platform capabilities. Communicate progress and impact to executive leadership and key stakeholders.
- Operational Excellence: Establish SLI/SLO metrics and alerting for observability of the Data and ML Platform to ensure a high level of reliability and performance. Drive continuous improvement through data-driven insights and operational metrics.

What You'll Need:
- 10+ years of experience in data engineering, ML platform development, or related fields, with at least 5 years in a leadership role.
- Familiarity with typical machine learning algorithms from an engineering perspective, including supervised and unsupervised approaches: how, why, and when labeled data is created and used.
- Knowledge of ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience with modern MLOps platforms such as MLflow, Kubeflow, or SageMaker preferred.
- Experience with data platform products and frameworks like Apache Spark, Flink, or comparable tools in GCP, and with orchestration technologies (e.g., Kubernetes, Airflow). Experience with Apache Iceberg is a plus.
- Deep understanding of machine learning workflows, including model training, deployment, and monitoring.
- Familiarity with data visualization tools and techniques.
- Experience bootstrapping new teams and growing them to make a large impact; experience operating as a site lead within a company is a bonus.
- Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Benefits of Working at CrowdStrike:
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.

CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations, and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website, or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 4 weeks ago

Apply

1.0 - 5.0 years

8 - 12 Lacs

Mumbai

Work from Office


Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn

Sr. AI/ML Developer
Experience: 8-10 Years
Location: Thane / Vikhroli, Mumbai

About The Role: We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions.

Key Responsibilities:
- Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance.
- Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data.
- Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed.
- Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions.
- Version Control: Utilize Git to ensure proper management and documentation of model code and workflows.
- Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA.
- Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results.
- TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications.

Key Skills:
- Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy.
- Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets.
- Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc.
- Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models.
- Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking.
- Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows.

Experience and Qualifications:
- Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques.
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation.

Preferred Qualifications:
- Hands-on experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with containerization tools like Docker and Kubernetes.
- Experience deploying models into production environments.
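The model validation work this role describes starts with a held-out split and a metric. A minimal stdlib sketch of both pieces — scikit-learn's `train_test_split` and `accuracy_score` are the production-grade equivalents:

```python
import random

def train_test_split(features, labels, test_frac=0.25, seed=42):
    """Shuffle a dataset reproducibly and carve off a held-out test set."""
    rng = random.Random(seed)
    idx = list(range(len(features)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([features[i] for i in train], [labels[i] for i in train],
            [features[i] for i in test], [labels[i] for i in test])

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Fixing the seed makes the split reproducible across runs, which matters when comparing candidate models against the same held-out data.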

Posted 4 weeks ago

Apply

1.0 - 5.0 years

8 - 12 Lacs

Thane

Work from Office


Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn

Role: Sr AI/ML Developer
Experience: 8-10 Years
Location: Thane / Vikhroli, Mumbai

About The Role
We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience in building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions.

Key Responsibilities
Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance.
Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data.
Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed.
Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions.
Version Control: Utilize Git for version control to ensure proper management and documentation of model code and workflows.
Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA.
Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results.
TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications.

Key Skills
Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy.
Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets.
Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc.
Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models.
Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking.
Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows.

Experience And Qualifications
Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques.
Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation.

Preferred Qualifications
Hands-on experience with cloud platforms like AWS, GCP, or Azure.
Familiarity with containerization tools like Docker and Kubernetes.
Experience in deploying models into production environments.
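Model validation of the kind this role describes usually starts with a simple holdout evaluation. The sketch below is illustrative only (plain Python, no framework); it shows the split-and-score pattern that libraries like Scikit-learn wrap in `train_test_split` and `accuracy_score`:

```python
import random

def train_test_split(rows, labels, test_frac=0.25, seed=42):
    """Shuffle and split paired data into train/test portions."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ([rows[i] for i in train_idx], [labels[i] for i in train_idx],
            [rows[i] for i in test_idx], [labels[i] for i in test_idx])

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Holding out data the model never saw during training is what makes the reported accuracy an honest estimate rather than a memorization score.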

Posted 4 weeks ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office


Role: MLOps Engineer
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Keywords / Skillset
AWS SageMaker, Azure ML Studio, GCP Vertex AI
PySpark, Azure Databricks
MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
Kubernetes, AKS, Terraform, FastAPI

Focus Areas
Model deployment, model monitoring, model retraining
Deployment, inference, monitoring, and retraining pipelines
Drift detection (data drift, model drift)
Experiment tracking
MLOps architecture
REST API publishing

Job Responsibilities:
Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
Work on a backlog of activities to raise MLOps maturity in the organization.
Proactively introduce a modern, agile, and automated approach to Data Science.
Conduct internal training and presentations on the benefits and usage of MLOps tools.

Required experience and qualifications:
Wide experience with Kubernetes.
Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
Good understanding of ML and AI concepts.
Hands-on experience in ML model development.
Proficiency in Python for both ML and automation tasks.
Good knowledge of Bash and the Unix command-line toolkit.
Experience in implementing CI/CD/CT pipelines.
Experience with cloud platforms, preferably AWS, is an advantage.
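Drift detection, one of the focus areas this role lists, is often implemented with a distribution-distance statistic over model inputs or scores. A minimal sketch of the Population Stability Index (PSI) in plain Python; the bin edges and thresholds here are illustrative, not a production recipe:

```python
import math

def psi(expected, actual, cuts):
    """Population Stability Index between a baseline and a live sample.

    `cuts` are bin edges over the score range; each sample is histogrammed
    into len(cuts) + 1 bins and the binwise divergence is summed.
    """
    def fractions(sample):
        bins = [0] * (len(cuts) + 1)
        for x in sample:
            bins[sum(x > c for c in cuts)] += 1
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(b / len(sample), 1e-6) for b in bins]

    return sum((a - e) * math.log(a / e)
               for e, a in zip(fractions(expected), fractions(actual)))
```

By a common rule of thumb, PSI below 0.1 suggests a stable population and above 0.25 flags major drift; managed services such as SageMaker Model Monitor provide richer, production-grade equivalents of this check.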

Posted 4 weeks ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office


Role: AIML Data Scientist
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges.
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem.
b. Improve model accuracy to deliver greater business impact.
c. Estimate the business impact of deploying the model.
2. Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge.
3. Work with tools and scripts for pre-processing data and feature engineering for model development (Python / R / SQL / cloud data pipelines).
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch.
5. Experience in using Deep Learning models with text, speech, image, and video data.
a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
c. Knowledge of state-of-the-art Deep Learning algorithms.
6. Optimize and tune Deep Learning models for the best possible accuracy.
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g. using Power BI / Tableau.
8. Work with application teams in deploying models on the cloud as a service or on-prem.
a. Deploy models in a test/control framework for tracking.
b. Build CI/CD pipelines for ML model deployment.
9. Integrate AI & ML models with other applications using REST APIs and other connector technologies.
10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AIML work and its impact.

Technology / Subject Matter Expertise
Sufficient expertise in machine learning and the mathematical and statistical sciences.
Use of versioning and collaboration tools like Git / GitHub.
Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming.
Develop prototype-level ideas into a solution that can scale to industrial-grade strength.
Ability to quantify and estimate the impact of ML models.

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground.
Must have the ability to share, explain and "sell" their thoughts, processes, ideas and opinions, even outside their own span of control.
Ability to think ahead and anticipate the needs for solving the problem.
Ability to communicate key messages effectively and articulate strong opinions in large forums.

Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle.
Ability to process huge amounts of data using PySpark/Hadoop.
Development and application of Reinforcement Learning.
Knowledge of optimization/genetic algorithms.
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios.
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy.
Experience with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, or Google Cloud is a big plus.
Experience with platforms like DataRobot, Cognitive Scale, H2O.ai, etc. is a big plus.
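The pre-processing and feature-engineering work this role describes can be as simple as turning raw text into count vectors before any model sees it. A minimal, framework-free sketch of bag-of-words vectorization (real pipelines would reach for spaCy or scikit-learn's `CountVectorizer` instead):

```python
def bag_of_words(docs):
    """Build a sorted vocabulary and term-count vectors for a small corpus."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] += 1  # count each occurrence of the term
        vectors.append(v)
    return vocab, vectors
```

The resulting fixed-width vectors are what classical classifiers (logistic regression, SVMs) consume; deep NLP models replace this step with learned embeddings.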

Posted 4 weeks ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Requirements
Certified AWS Consultant with hands-on experience in AI platform development projects.
Experience in setting up, maintaining, and developing cloud infrastructure.
Proficiency with Infrastructure as Code tools such as CloudFormation and/or Terraform.
Strong knowledge of AWS services including SageMaker, S3, EC2, etc.
In-depth proficiency in at least one high-level programming language (Python, Java, etc.).
Good understanding of data analytics use cases and AI/ML technologies.

Primary Skills
SageMaker, S3, EC2
CloudFormation / Terraform
Java / Python
AI/ML

Posted 1 month ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Udupi, Karnataka, India

On-site


As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort.

Responsibilities:
Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions.
Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis.
Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions.
Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses.
Continuously optimize models based on user feedback and data-driven insights to improve performance.
Integrate IVA/chat solutions with internal systems such as CRM and backend databases.
Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards.
Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability.

Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
5+ years of experience in developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services.
Expertise with Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect.
Experience with AWS Bedrock and SageMaker is an added advantage.
Solid understanding of API integration and experience working with RESTful services.
Strong problem-solving skills, attention to detail, and ability to work independently and in a team.
Excellent communication skills, both written and verbal.
Experience in financial services or fintech projects.
Knowledge of data security best practices and compliance requirements in the financial sector.

Work Schedule: This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
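Intent recognition, which services like Amazon Lex handle with trained NLU models, reduces at its simplest to scoring an utterance against per-intent vocabularies and falling back when nothing matches. A toy sketch with hypothetical intent names, purely to illustrate the classify-or-fall-back pattern:

```python
# Hypothetical intents and keywords, for illustration only.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "account", "funds"},
    "card_block":    {"block", "lost", "stolen", "card"},
    "agent":         {"human", "agent", "representative"},
}

def classify_intent(utterance, fallback="fallback"):
    """Score each intent by keyword overlap; return the fallback on no match."""
    words = set(utterance.lower().split())
    scores = {name: len(words & kw) for name, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else fallback
```

The fallback branch matters most in production: routing unrecognized utterances to a human agent or clarifying prompt is what keeps a bot from guessing badly.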

Posted 1 month ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Navi Mumbai, Maharashtra, India

On-site


As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort.

Responsibilities:
Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions.
Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis.
Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions.
Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses.
Continuously optimize models based on user feedback and data-driven insights to improve performance.
Integrate IVA/chat solutions with internal systems such as CRM and backend databases.
Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards.
Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability.

Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
5+ years of experience in developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services.
Expertise with Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect.
Experience with AWS Bedrock and SageMaker is an added advantage.
Solid understanding of API integration and experience working with RESTful services.
Strong problem-solving skills, attention to detail, and ability to work independently and in a team.
Excellent communication skills, both written and verbal.
Experience in financial services or fintech projects.
Knowledge of data security best practices and compliance requirements in the financial sector.

Work Schedule: This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.

Posted 1 month ago

Apply

0.0 - 4.0 years

0 - 4 Lacs

Navi Mumbai, Maharashtra, India

On-site


As an MLOps Engineer, you will be responsible for building and optimizing our machine learning infrastructure. You will leverage AWS services, containerization, and automation to streamline the deployment and monitoring of ML models. Your expertise in MLOps best practices, combined with your experience in managing large ML operations, will ensure our models are effectively deployed, managed, and maintained in production environments.

Responsibilities:

Machine Learning Operations (MLOps) & Deployment:
Build, deploy, and manage ML models in production using AWS SageMaker, AWS Lambda, and other relevant AWS services.
Develop automated pipelines for model training, validation, deployment, and monitoring to ensure high availability and low latency.
Implement best practices for CI/CD in ML model deployment and manage versioning for seamless updates.

Infrastructure Development & Optimization:
Design and maintain scalable, efficient, and secure infrastructure for machine learning operations using AWS services (e.g., EC2, S3, SageMaker, ECR, ECS/EKS).
Leverage containerization (Docker, Kubernetes) to deploy models as microservices, optimizing for scalability and resilience.
Manage infrastructure as code (IaC) using tools like Terraform, AWS CloudFormation, or similar, ensuring reliable and reproducible environments.

Model Monitoring & Maintenance:
Set up monitoring, logging, and alerting for deployed models to track model performance, detect anomalies, and ensure uptime.
Implement feedback loops to enable automated model retraining based on new data, ensuring models remain accurate and relevant over time.
Troubleshoot and resolve issues in the ML pipeline and infrastructure to maintain seamless operations.

AWS Connect & Integration:
Integrate machine learning models with AWS Connect or similar services for customer interaction workflows, providing real-time insights and automation.
Work closely with cross-functional teams to ensure models can be easily accessed and utilized by various applications and stakeholders.

Collaboration & Stakeholder Engagement:
Collaborate with data scientists, engineers, and DevOps teams to ensure alignment on project goals, data requirements, and model deployment standards.
Provide technical guidance on MLOps best practices and educate team members on efficient ML deployment and monitoring processes.
Actively participate in project planning, architecture decisions, and roadmapping sessions to improve our ML infrastructure.

Security & Compliance:
Implement data security and compliance measures, ensuring all deployed models meet organizational and regulatory standards.
Apply appropriate data encryption and manage access controls to safeguard sensitive information used in ML models.

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 5+ years of experience as an MLOps Engineer, DevOps Engineer, or similar role focused on machine learning deployment and operations.
Strong expertise in AWS services, particularly SageMaker, EC2, S3, Lambda, and ECR/ECS/EKS.
Proficiency in Python, including ML-focused libraries like scikit-learn and data manipulation libraries like pandas.
Hands-on experience with containerization tools such as Docker and Kubernetes.
Familiarity with infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation.
Experience with CI/CD pipelines, Git, and version control for ML model deployment.
MLOps & Model Management: Proven experience in managing large ML projects, including model deployment, monitoring, and maintenance.
AWS Connect & Integration: Understanding of AWS Connect for customer interactions and integration with ML models.
Soft Skills: Strong communication and collaboration skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred:
Experience with data streaming and message queues (e.g., Kafka, AWS Kinesis).
Familiarity with monitoring tools like Prometheus, Grafana, or CloudWatch for tracking model performance.
Knowledge of data governance, security, and compliance requirements related to ML data handling.
Certification in AWS or relevant cloud platforms.

Work Schedule: This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
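The monitoring-and-feedback-loop responsibilities this role describes can be pictured as a rolling window over prediction outcomes that trips a retraining flag. A minimal sketch; the window size and threshold are illustrative, and in production this role would be played by CloudWatch or Prometheus alarms feeding a retraining pipeline:

```python
from collections import deque

class ModelMonitor:
    """Track rolling accuracy of a deployed model and flag when to retrain."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)  # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual):
        self.window.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self):
        # Only alert once the window holds enough observations to be meaningful.
        return len(self.window) == self.window.maxlen and self.accuracy < self.threshold
```

The full-window guard avoids alerting on the first few (noisy) observations, a common source of false alarms in naive monitors.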

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad

Work from Office


Key Responsibilities:
Design and develop machine learning and deep learning models for tasks such as text classification, entity recognition, sentiment analysis, and document intelligence.
Build and optimize NLP pipelines using models like BERT, GPT, LayoutLM, and Transformer architectures.
Implement and experiment with Generative AI techniques using frameworks like Hugging Face, OpenAI APIs, and PyTorch/TensorFlow.
Perform data collection, web scraping, data cleaning, and feature engineering for structured and unstructured data sources.
Deploy ML models using Docker and Kubernetes, and implement CI/CD pipelines for scalable and automated workflows.
Use cloud services (e.g., GCP, Azure AI) for model hosting, data storage, and compute resources.
Collaborate with cross-functional teams to integrate ML models into production-grade applications.
Apply MLOps practices including model versioning, monitoring, retraining pipelines, and reproducibility.

Technical Skills:
Languages & Libraries: Python, Pandas, NumPy, Scikit-learn, TensorFlow, Keras, PyTorch, OpenCV, Seaborn, XGBoost, NLTK, Hugging Face, BeautifulSoup, Selenium, Scrapy
Modeling & NLP: Logistic Regression, Random Forest, SVM, CNN, RNN, Transformers, BERT, GPT, LLMs
Tools & Platforms: Git, Docker, Kubernetes, CI/CD, Azure AI Services, GCP, Vertex AI
Concepts: Machine Learning, Deep Learning, MLOps, Generative AI, Text Analytics, Predictive Analytics
Databases & Querying: Basics of SQL
Other Skills: Data Visualization (Matplotlib, Seaborn), Model Optimization, Version Control
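MLOps practices such as model versioning, mentioned in the responsibilities above, often start with a registry keyed by a content hash of the training configuration. A minimal in-memory sketch, with illustrative names throughout; MLflow's Model Registry is the production-grade analogue:

```python
import hashlib
import json

class ModelRegistry:
    """Minimal in-memory registry: version IDs derive from name + parameters."""

    def __init__(self):
        self.versions = []

    def register(self, name, params, metrics):
        # Content-address the version so identical configs hash identically.
        payload = json.dumps({"name": name, "params": params}, sort_keys=True)
        version_id = hashlib.sha256(payload.encode()).hexdigest()[:12]
        self.versions.append({"id": version_id, "name": name,
                              "params": params, "metrics": metrics})
        return version_id

    def best(self, name, metric):
        """Return the registered version of `name` with the highest `metric`."""
        candidates = [v for v in self.versions if v["name"] == name]
        return max(candidates, key=lambda v: v["metrics"][metric])
```

Hashing the configuration (rather than assigning sequential numbers) makes reproducibility checks trivial: rerunning the same training config yields the same version ID.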

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office


Role Overview
Join our Pune AI Center of Excellence to drive software and product development in the AI space. As an AI/ML Engineer, you'll build and ship core components of our AI products, owning end-to-end RAG pipelines, persona-driven fine-tuning, and scalable inference systems that power next-generation user experiences.

Key Responsibilities

Model Fine-Tuning & Persona Design
Adapt and fine-tune open-source large language models (LLMs) (e.g. CodeLlama, StarCoder) to specific product domains.
Define and implement "personas" (tone, knowledge scope, guardrails) at inference time to align with product requirements.

RAG Architecture & Vector Search
Build retrieval-augmented generation systems: ingest documents, compute embeddings, and serve with FAISS, Pinecone, or ChromaDB.
Design semantic chunking strategies and optimize context-window management for product scalability.

Software Pipeline & Product Integration
Develop production-grade Python data pipelines (ETL) for real-time vector indexing and updates.
Containerize model services in Docker/Kubernetes and integrate into CI/CD workflows for rapid iteration.

Inference Optimization & Monitoring
Quantize and benchmark models for CPU/GPU efficiency; implement dynamic batching and caching to meet product SLAs.
Instrument monitoring dashboards (Prometheus/Grafana) to track latency, throughput, error rates, and cost.

Prompt Engineering & UX Evaluation
Craft, test, and iterate prompts for chatbots, summarization, and content extraction within the product UI.
Define and track evaluation metrics (ROUGE, BLEU, human feedback) to continuously improve the product's AI outputs.

Must-Have Skills
ML/AI Experience: 3-4 years in machine learning and generative AI, including 18 months on LLM-based products.
Programming & Frameworks: Python, PyTorch (or TensorFlow), Hugging Face Transformers.
RAG & Embeddings: Hands-on with FAISS, Pinecone, or ChromaDB and semantic chunking.
Fine-Tuning & Quantization: Experience with LoRA/QLoRA, 4-bit/8-bit quantization, and Model Context Protocol (MCP).
Prompt & Persona Engineering: Deep expertise in prompt-tuning and persona specification for product use cases.
Deployment & Orchestration: Docker, Kubernetes fundamentals, CI/CD pipelines, and GPU setup.

Nice-to-Have
Multi-modal AI combining text, images, or tabular data.
Agentic AI systems with reasoning and planning loops.
Knowledge-graph integration for enhanced retrieval.
Cloud AI services (AWS SageMaker, GCP Vertex AI, or Azure Machine Learning).
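The RAG pipeline this role owns (chunk, embed, retrieve) can be sketched end to end in a few lines. The example below substitutes a toy word-count "embedding" for a real embedding model and a linear scan for FAISS/Pinecone/ChromaDB; it only illustrates the shape of the pipeline, not a usable implementation:

```python
import math

def chunk(text, size=5, overlap=2):
    """Split a document into overlapping word windows (semantic-chunking stand-in)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text, vocab):
    """Toy bag-of-words 'embedding'; a real pipeline calls an embedding model."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, vocab, k=1):
    """Rank chunks by similarity to the query: the 'R' in RAG."""
    q = embed(query, vocab)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)), reverse=True)
    return ranked[:k]
```

The top-k chunks returned here are what a production system would paste into the LLM's context window; overlap between chunks is what keeps answers from being cut off at chunk boundaries.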

Posted 1 month ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office


Job Description

About Oracle APAC ISV Business
Oracle's APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that operates to serve a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward adopting cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors to SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our cloud engineering team works on pitch-to-production scenarios of bringing ISV solutions onto Oracle Cloud Infrastructure (OCI), with the aim of providing a cloud platform for running their business that is more performant, more flexible, more secure, compliant with open-source technologies, and offers multiple innovation options while remaining most cost-effective. The team walks the path alongside our customers and is regarded by them as a trusted techno-business advisor.

Required Skills/Experience
Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Relevant certifications in AI services on OCI and/or other cloud platforms (AWS, Azure, Google Cloud).
8+ years of professional work experience.
Proven experience with end-to-end AI solution implementation, from data integration to model deployment and optimization.
Experience in the design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG workflows.
Proficiency in frameworks such as TensorFlow, PyTorch, scikit-learn, and Keras, and programming languages such as Python, R, or SQL.
Experience with data wrangling, data pipelines, and data integration tools.
Hands-on experience with LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
Knowledge of containerization technologies such as Docker and orchestration tools like Kubernetes to scale AI models.
Expertise in analytics platforms like Power BI, Tableau, or other business intelligence tools.
Experience working with cloud platforms, particularly for AI and analytics workloads.
Familiarity with cloud-based AI services like OCI AI, AWS SageMaker, etc.
Experience with building and optimizing data pipelines for large-scale AI/ML applications using tools like Apache Kafka, Apache Spark, Apache Airflow, or similar.
Excellent communication skills, with the ability to clearly explain complex AI and analytics concepts to non-technical stakeholders.
Proven ability to work with diverse teams and manage client expectations.
Solid experience managing multiple implementation projects simultaneously while maintaining high-quality standards.
Ability to develop and manage project timelines, resources, and budgets.

Career Level - IC4

Responsibilities
What You'll Do
As a solution specialist, you will work closely with our cloud architects and key ISV stakeholders to propagate awareness and drive implementation of OCI-native as well as open-source cloud-native technologies by ISV customers.
Design, implement, and optimize AI and analytics solutions using OCI AI and Analytics services that enable advanced analytics and AI use cases.
Assist clients to architect and deploy AI systems that integrate seamlessly with existing client infrastructure, ensuring scalability, performance, and security.
Support the deployment of machine learning models, including model training, testing, and fine-tuning.
Ensure scalability, robustness, and performance of AI models in production environments.
Design, build, and deploy end-to-end AI solutions with a focus on LLMs and agentic AI workflows (including proactive, reactive, RAG, etc.).
Help customers migrate from other cloud vendors' AI platforms or bring their own AI/ML models, leveraging OCI AI services and the Data Science platform.
Design, propose, and implement solutions on OCI that help customers move seamlessly when adopting OCI for their AI requirements.
Provide direction and specialist knowledge to clients in developing AI chatbots using ODA (Oracle Digital Assistant), OIC (Oracle Integration Cloud), and OCI GenAI services.
Configure, integrate, and customize analytics platforms and dashboards on OCI.
Implement data pipelines and ensure seamless integration with existing IT infrastructure.
Drive discussions on OCI GenAI and the AI platform across the region and accelerate implementation of OCI AI services into production.

Posted 1 month ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office


Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Keywords / Skillset
AWS SageMaker, Azure ML Studio, GCP Vertex AI
PySpark, Azure Databricks
MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
Kubernetes, AKS, Terraform, FastAPI

Focus Areas
Model deployment, model monitoring, model retraining
Deployment, inference, monitoring, and retraining pipelines
Drift detection (data drift, model drift)
Experiment tracking
MLOps architecture
REST API publishing

Job Responsibilities:
Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
Work on a backlog of activities to raise MLOps maturity in the organization.
Proactively introduce a modern, agile, and automated approach to Data Science.
Conduct internal training and presentations on the benefits and usage of MLOps tools.

Required experience and qualifications:
Wide experience with Kubernetes.
Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
Good understanding of ML and AI concepts.
Hands-on experience in ML model development.
Proficiency in Python for both ML and automation tasks.
Good knowledge of Bash and the Unix command-line toolkit.
Experience in implementing CI/CD/CT pipelines.
Experience with cloud platforms, preferably AWS, is an advantage.

Posted 1 month ago

Apply

2.0 - 7.0 years

11 - 21 Lacs

Pune

Hybrid


Rapid7, a global cybersecurity company, is expanding its AI Centre of Excellence in India. We seek a Senior AI Engineer (MLOps) to build and manage MLOps infrastructure, deploy ML models, and support AI-powered threat detection systems.

Work Location: Amar Tech Park, Balewadi - Hinjawadi Rd, Patil Nagar, Balewadi, Pune, Maharashtra 411045

Key Responsibilities:
Build and deploy ML/LLM models in AWS using SageMaker and Terraform.
Develop APIs/interfaces using Python, TypeScript, and FastAPI/Flask.
Manage data pipelines, model lifecycle, observability, and guardrails.
Collaborate with cross-functional teams; follow agile and DevOps best practices.

Requirements:
5+ years in software engineering, including 3-5 years in ML deployment (AWS).
Proficient in Python, TypeScript, Docker, Kubernetes, and CI/CD.
Experience with LLMs, GPU resources, and ML monitoring.

Nice to Have: NLP, model risk management, scalable ML systems.

Rapid7 values innovation, diversity, and ethical AI, making this an ideal role for engineers seeking impact in cybersecurity.
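"Guardrails", listed among the responsibilities above, often include output filtering before a model response reaches a user. A toy redaction sketch; the two patterns are illustrative only, and production guardrail frameworks are far more thorough:

```python
import re

# Illustrative patterns; real guardrails cover many more sensitive-data shapes.
BLOCK_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # possible payment-card number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def guard_output(text, mask="[REDACTED]"):
    """Redact sensitive-looking spans from an LLM response before delivery."""
    for pattern in BLOCK_PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

Running such a filter on the output side (rather than only on prompts) catches sensitive data the model may have memorized or echoed back from retrieved context.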

Posted 1 month ago

Apply

15 - 24 years

20 - 35 Lacs

Kochi, Chennai, Thiruvananthapuram

Work from Office


Roles and Responsibilities:

Architecture & Infrastructure Design
Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership
Translate business objectives into actionable AI strategies and solutions.
Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment
Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
Implement frameworks for bias detection, explainability, and responsible AI.
Enhance model performance through tuning and efficient resource utilization.

Security & Compliance
Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration
Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
Promote cross-functional collaboration with business and technical stakeholders.
Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance
Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
Expertise in Python for ML development and automation.
Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
Proven track record in delivering AI/ML projects into production environments.
Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
Experience in implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
Knowledge of cloud security best practices and IAM role configuration.
Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
Experience with AI governance and ethical AI frameworks.
Expertise in cost optimization and performance tuning for AI on the cloud.
Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.
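Bias detection, called out in the skills above, has simple quantitative starting points. A minimal demographic-parity sketch in plain Python; fairness libraries such as Fairlearn or AIF360 offer vetted, full-featured implementations of this and many other metrics:

```python
def demographic_parity_gap(predictions, groups):
    """Largest absolute difference in positive-prediction rate across groups.

    `predictions` are 0/1 model outputs; `groups` holds the protected-attribute
    value for each prediction. A gap of 0 means all groups receive positive
    predictions at the same rate.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]
```

A large gap does not prove unfairness on its own, but it is a standard trigger for deeper review of features, labels, and thresholds.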

Posted 1 month ago

Apply

11 - 14 years

35 - 50 Lacs

Chennai

Work from Office


Role: MLOps Engineer
Location: PAN India

Keywords / Skillset
AWS SageMaker, Azure ML Studio, GCP Vertex AI
PySpark, Azure Databricks
MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
Kubernetes, AKS, Terraform, FastAPI

Focus Areas
Model deployment, model monitoring, model retraining
Deployment, inference, monitoring, and retraining pipelines
Drift detection (data drift, model drift)
Experiment tracking
MLOps architecture
REST API publishing

Job Responsibilities:
Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
Work on a backlog of activities to raise MLOps maturity in the organization.
Proactively introduce a modern, agile, and automated approach to Data Science.
Conduct internal training and presentations on the benefits and usage of MLOps tools.

Required experience and qualifications:
Wide experience with Kubernetes.
Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
Good understanding of ML and AI concepts.
Hands-on experience in ML model development.
Proficiency in Python for both ML and automation tasks.
Good knowledge of Bash and the Unix command-line toolkit.
Experience in implementing CI/CD/CT pipelines.
Experience with cloud platforms, preferably AWS, is an advantage.

Posted 1 month ago


4 - 6 years

18 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


POSITION: MLOps Engineer
LOCATION: Bangalore (Hybrid)
Work timings: 12 pm - 9 pm
Budget: Maximum 20 LPA

ROLE OBJECTIVE
The MLOps Engineer will support various segments by enhancing and optimizing the deployment and operationalization of machine learning models. The primary objective is to collaborate with data scientists, data engineers, and business stakeholders to ensure efficient, scalable, and reliable ML model deployment and monitoring. The role involves integrating ML models into production systems, automating workflows, and maintaining robust CI/CD pipelines.

RESPONSIBILITIES
- Model Deployment and Operationalization: Implement, manage, and optimize the deployment of machine learning models into production environments.
- CI/CD Pipelines: Develop and maintain continuous integration and continuous deployment pipelines to streamline the deployment of ML models.
- Infrastructure Management: Design and manage scalable, reliable, and secure cloud infrastructure for ML workloads on platforms such as AWS and Azure.
- Monitoring and Logging: Implement monitoring, logging, and alerting mechanisms to ensure the performance and reliability of deployed models.
- Automation: Automate ML workflows, including data preprocessing, model training, validation, and deployment, using tools like Kubeflow, MLflow, and Airflow.
- Collaboration: Work closely with data scientists, data engineers, and business stakeholders to understand requirements and deliver solutions.
- Security and Compliance: Ensure that ML models and data workflows comply with security, privacy, and regulatory requirements.
- Performance Optimization: Optimize the performance of ML models and the underlying infrastructure for speed and cost-efficiency.

EXPERIENCE
- Years of Experience: 4-6 years in ML model deployment and operationalization.
- Technical Expertise: Proficiency in Python, Azure ML, AWS SageMaker, and other ML tools and frameworks.
- Cloud Platforms: Extensive experience with AWS and Azure.
- Containerization and Orchestration: Hands-on experience with Docker and Kubernetes for ML workloads.

EDUCATION/KNOWLEDGE
- Educational Qualification: Master's degree (preferably in Computer Science) or B.Tech/B.E.
- Domain Knowledge: Familiarity with EMEA business operations is a plus.

OTHER IMPORTANT NOTES
- Flexible Shifts: Must be willing to work flexible shifts.
- Team Collaboration: Experience with team collaboration and cloud tools.
- Algorithm Building and Deployment: Proficiency in building and deploying algorithms on Azure/AWS platforms.

If interested, please share the following details along with your most recent resume to geeta.negi@compunnel.com:
- Total experience
- Relevant experience
- Current CTC
- Expected CTC
- Notice period (last working day if you are serving notice)
- Current location
- Skill 1 (mention the skill): rating out of 5
- Skill 2 (mention the skill): rating out of 5
- Skill 3 (mention the skill): rating out of 5
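A core piece of the CI/CD pipelines described above is the promotion gate: the step that decides whether a newly trained model may replace the one in production. The sketch below is illustrative logic, not any specific platform's API; the metric names and thresholds are assumptions:

```python
# Illustrative sketch of a model-promotion gate in a CI/CD pipeline:
# the candidate model is deployed only if it beats production on
# held-out accuracy without a meaningful latency regression.
# Metric names and thresholds below are assumptions for illustration.

def should_promote(candidate, production, min_gain=0.01):
    """Promote only on a real accuracy gain with acceptable latency."""
    gain = candidate["accuracy"] - production["accuracy"]
    latency_ok = candidate["p95_latency_ms"] <= production["p95_latency_ms"] * 1.1
    return gain >= min_gain and latency_ok

prod = {"accuracy": 0.91, "p95_latency_ms": 120}
cand = {"accuracy": 0.93, "p95_latency_ms": 115}
print(should_promote(cand, prod))  # True -> pipeline proceeds to deploy
```

In practice this check would run as a pipeline stage after validation, with the metrics pulled from an experiment tracker such as MLflow rather than hard-coded.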

Posted 1 month ago


5 - 10 years

25 - 30 Lacs

Mumbai, Navi Mumbai, Chennai

Work from Office


We are looking for an AI Engineer (Senior Software Engineer). Interested candidates, please email your resume to mayura.joshi@lionbridge.com or WhatsApp 9987538863.

Responsibilities:
- Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks.
- Implement and fine-tune models to improve response relevance and contextual accuracy.
- Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding.
- Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities.
- Integrate AI models with enterprise applications and APIs.
- Optimize model inference for performance and scalability.
- Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives.
- Ensure ethical AI implementation, addressing bias, explainability, and data security.
- Stay updated on the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements:
- 8+ years of experience in software development following development standards.
- Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain.
- Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies.
- Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate.
- Solid understanding of NLP, deep learning, and transformer architectures.
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.).
- Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
- Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments.
- Strong problem-solving and debugging skills.
- Excellent communication and teamwork abilities.
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
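The retrieval step at the heart of the RAG pipelines described above can be sketched in a few lines: rank stored chunks by cosine similarity to the query embedding, then prepend the top hits to the LLM prompt. The toy 3-dimensional vectors below stand in for a real embedding model and a vector database such as FAISS or Pinecone:

```python
# Hedged sketch of RAG retrieval: cosine-similarity ranking over a tiny
# in-memory "vector store". Real systems use an embedding model and a
# vector database (FAISS, Pinecone, Weaviate); these 3-dim vectors and
# document names are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

store = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "privacy terms":  [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Top-k chunks by similarity, to be injected into the LLM prompt."""
    ranked = sorted(store, key=lambda doc: cosine(store[doc], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2, 0.1]))  # ['refund policy', 'shipping times']
```

Grounding the prompt in these retrieved chunks is what improves the "response relevance and contextual accuracy" the listing asks for, since the model answers from retrieved facts rather than parametric memory alone.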

Posted 1 month ago


12 - 22 years

50 - 55 Lacs

Hyderabad, Gurugram

Work from Office


Job Summary: Director, Collection Platforms and AI

As a Director, you will be essential to driving customer satisfaction by delivering tangible business results to customers. You will work in the Enterprise Data Organization as an advocate and problem solver for the customers in your portfolio, as part of the Collection Platforms and AI team. You will use communication and problem-solving skills to support customers on their automation journey, applying emerging automation tools to build and deliver end-to-end automation solutions.

Team: Collection Platforms and AI
The Enterprise Data Organization's objective is to drive growth across S&P divisions, enhance speed and productivity in our operations, and prepare our data estate for the future, benefiting our customers. Automation therefore represents a massive opportunity to improve quality and efficiency, to expand into new markets and products, and to create customer and shareholder value. Agentic automation is the next frontier in intelligent process evolution, combining AI agents, orchestration layers, and cloud-native infrastructure to enable autonomous decision-making and task execution. To leverage advancements in automation tools, it is imperative not only to invest in the technologies but also to democratize them, build literacy, and empower the workforce. The Collection Platforms and AI team's mission is to drive this automation strategy across S&P Global and help create a truly digital workplace. We are responsible for creating, planning, and delivering transformational projects for the company using state-of-the-art technologies and data science methods, developed either in house or in partnership with vendors. We are transforming the way we collect the essential intelligence our customers need to make decisions with conviction, delivering it faster and at scale while maintaining the highest quality standards.

What we're looking for:
You will lead the design, development, and scaling of AI-driven agentic pipelines to transform workflows across S&P Global. This role requires a strategic leader who can architect end-to-end automation solutions using agentic frameworks, cloud infrastructure, and orchestration tools while managing senior stakeholders and driving adoption at scale.
- A visionary technical leader with knowledge of designing agentic pipelines and deploying AI applications in production environments.
- Understanding of cloud infrastructure (AWS/Azure/GCP), orchestration tools (e.g., Airflow, Kubeflow), and agentic frameworks (e.g., LangChain, AutoGen).
- Proven ability to translate business workflows into automation solutions, with an emphasis on financial/data services use cases.
- An independent, proactive person who is innovative, adaptable, creative, and detail-oriented, with high energy and a positive attitude.
- Exceptional skills in listening to clients and articulating ideas and complex information clearly and concisely.
- Proven record of creating and maintaining strong relationships with senior members of client organizations, addressing their needs, and maintaining a high level of client satisfaction.
- Ability to identify the right solution for all types of problems, understanding the ultimate value of each project.
- Ability to operationalize this technology across S&P Global, delivering scalable solutions that enhance efficiency, reduce latency, and unlock new capabilities for internal and external clients.
- Exceptional communication skills, with experience presenting to C-level executives.

Responsibilities:
- Engage with multiple client areas (external and internal), truly understand their problems, and then deliver and support solutions that fit their needs.
- Understand existing S&P Global products and leverage them as necessary to deliver a seamless end-to-end solution to the client.
- Evangelize agentic capabilities through workshops, demos, and executive briefings.
- Educate and spread awareness within the external client base about automation capabilities to increase usage and idea generation.
- Increase automation adoption by focusing on distinct users and distinct processes.
- Deliver exceptional communication to multiple layers of client management.
- Provide automation training, coaching, and assistance specific to a user's role.
- Demonstrate strong working knowledge of automation features to meet evolving client needs.
- Maintain extensive knowledge and literacy of the suite of products and services offered, including ongoing enhancements and new offerings, and how they fulfill customer needs.
- Establish monitoring frameworks for agent performance, drift detection, and self-healing mechanisms.
- Develop governance models for ethical AI agent deployment and compliance.

Preferred Qualifications:
- 12+ years of work experience, with 5+ years in the Automation/AI space.
- Knowledge of cloud platforms (AWS SageMaker, Azure ML, etc.), orchestration tools (Prefect, Airflow, etc.), and agentic toolkits (LangChain, LlamaIndex, AutoGen).
- Experience productionizing AI applications.
- Strong programming skills in Python and common AI frameworks.
- Experience with multi-modal LLMs and integrating vision and text for autonomous agents.
- Excellent written and oral communication in English.
- Excellent presentation skills, with a high degree of comfort speaking with senior executives, IT management, and developers.
- Hands-on ability to build quick prototypes/visuals to illustrate high-level product concepts and capabilities.
- Experience in deployment and management of applications on cloud-based infrastructure.
- A desire to work in a fast-paced, challenging environment.
- Ability to work in cross-functional, multi-geographic teams.
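The self-healing mechanisms this role is asked to establish for agent pipelines typically reduce to a retry-then-fallback pattern around each agent step. A hedged sketch, assuming no particular agentic framework; the function names and messages are illustrative:

```python
# Illustrative self-healing wrapper for an agent step (no specific
# agentic framework assumed): retry transient failures a few times,
# then fall back to a safe default so the pipeline degrades gracefully.

def run_with_healing(step, retries=2, fallback=lambda: "escalate to human"):
    """Run an agent step, retrying on failure before falling back."""
    for _ in range(retries + 1):
        try:
            return step()
        except Exception:
            continue  # transient failure: try again
    return fallback()

calls = {"n": 0}

def flaky_step():
    """Simulated agent tool call that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient tool failure")
    return "task complete"

print(run_with_healing(flaky_step))  # 'task complete' (third attempt succeeds)
```

A monitoring framework would additionally log each failed attempt and feed the failure rate into the drift-detection and governance dashboards mentioned above.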

Posted 1 month ago
