
226 MLOps Jobs - Page 7

JobPe aggregates listings for easy access; you apply directly on the original job portal.

7 - 10 years

5 - 9 Lacs

Pune, Bengaluru

Work from Office

Naukri logo

What You'll Do
- Design and implement an enterprise data management strategy aligned with business processes, focusing on data model design, database development standards, and data management frameworks
- Develop and maintain data management and governance frameworks to ensure data quality, consistency, and compliance across Discovery domains such as multi-omics, in vivo, ex vivo, and in vitro datasets
- Design and develop scalable cloud-based (AWS or Azure) solutions following enterprise standards
- Design robust data models for semi-structured/structured datasets using a range of modelling techniques
- Design and implement complex ETL data pipelines to handle semi-structured/structured datasets coming from labs and scientific platforms
- Work with lab ecosystems (ELNs, LIMS, CDS, etc.) to build integration and data solutions around them
- Collaborate with stakeholders, including data scientists, researchers, and IT, to optimize data utilization and align data strategies with organizational goals
- Stay abreast of the latest trends in data management technologies and introduce innovative approaches to data analysis and pipeline development
- Lead projects from conception to completion, ensuring alignment with enterprise goals and standards
- Communicate complex technical details effectively to both technical and non-technical stakeholders
What You'll Bring
- 7+ years of hands-on experience developing data management solutions that solve problems in the Discovery/Research domain
- Advanced knowledge of data management tools and frameworks, such as SQL/NoSQL, ETL/ELT tools, and data visualization tools across various private clouds
- Strong experience with cloud-based DBMS/data warehouse offerings (AWS Redshift, AWS RDS/Aurora, Snowflake, Databricks), ETL tools, and other cloud-based tooling
- Well versed in the cloud computing offerings of AWS and Azure
- Well aware of industry data security and governance norms
- Experience building API integration layers between multiple systems
- Hands-on experience with data platform technologies such as Databricks, AWS, Snowflake, and HPC (certifications are a plus)
- Strong programming skills in languages such as Python and R
- Strong organizational and leadership skills
- Bachelor's or Master's degree in Computational Biology, Computer Science, or a related field; a Ph.D. is a plus

Preferred/Good to Have
- MLOps expertise leveraging ML platforms such as Dataiku, Databricks, and SageMaker
- Experience with other technologies such as data sharing (e.g., Starburst), data virtualization (Denodo), and API management (MuleSoft, etc.)
- A Cloud Solution Architect certification (such as AWS SA Professional)

Posted 2 months ago


8 - 13 years

25 - 40 Lacs

Chennai, Pune, Mumbai (All Areas)

Work from Office


Job Title: AWS MLOps Engineer – CI/CD Pipeline for Machine Learning Solutions (IT Department)
Location: Mumbai/Pune/Chennai (Work from Office)

About Clover Infotech: With 30 years of IT excellence, Clover Infotech is a leading global IT services and consulting company. Our 5,000+ experts specialize in Oracle, Microsoft, and open-source technologies, delivering solutions in application and technology modernization, cloud enablement, data management, automation, and assurance services. We help enterprises on their transformation journey by implementing business-critical applications and supporting technology infrastructure through a proven managed services model. Our SLA-based delivery ensures operational efficiency, cost-effectiveness, and enhanced information security. We proudly partner with companies ranging from Fortune 500 firms to emerging enterprises and new-age startups, offering technology-powered solutions that accelerate growth and drive success.

Job Summary: We are looking for an experienced AWS engineer to join our IT department to manage and optimize the continuous integration/continuous deployment (CI/CD) pipeline for machine learning (ML) solutions. The successful candidate will work closely with the model development team (ML engineers), data scientists, and DevOps teams to ensure smooth deployment, scaling, and monitoring of ML models on AWS. This role requires a deep understanding of AWS cloud services, DevOps practices, and machine learning infrastructure.

Key Responsibilities:

1. CI/CD Pipeline Management & Automation:
- Design, implement, and maintain robust CI/CD pipelines for deploying machine learning models and solutions.
- Automate and streamline deployment processes using AWS services such as CodePipeline, CodeBuild, CodeDeploy, and CodeCommit.
- Ensure seamless integration of model training, testing, and deployment stages within the CI/CD pipeline.
- Set up and manage infrastructure as code (IaC) using tools like AWS CloudFormation or Terraform to create scalable and reliable environments for ML applications.
- Automate deployment, scaling, and monitoring of machine learning models in AWS environments using AWS Lambda, ECS, EKS, and SageMaker.

2. AWS Cloud Services Management & Security:
- Manage and configure AWS cloud services such as EC2, S3, SageMaker, and Lambda to support machine learning pipelines and production environments.
- Use AWS SageMaker to manage the ML lifecycle, including data preparation, training, tuning, and model deployment.
- Set up automated workflows for model retraining and versioning based on new data inputs and performance metrics.
- Ensure compliance with industry standards and internal policies regarding data privacy, security, and governance for machine learning solutions.
- Implement DevOps best practices, including version control, code quality checks, and deployment automation using AWS services.
- Continuously improve infrastructure by staying up to date with new AWS features, best practices, and emerging technologies.

3. Monitoring & Optimization:
- Monitor the performance of deployed ML models and pipelines using AWS CloudWatch, CloudTrail, and other monitoring tools.
- Implement automated testing, validation, and monitoring processes to ensure models perform as expected in production environments.
- Optimize costs and performance by automating resource scaling, ensuring high availability, and improving pipeline efficiency.

4. Collaboration & Support:
- Collaborate with data scientists, machine learning engineers, and DevOps teams to integrate ML models into production systems.
- Provide support and troubleshooting expertise for pipeline issues, including model failures, deployment bottlenecks, and scaling problems.
- Work closely with security teams to implement best practices for security and compliance, ensuring that data and models are protected within AWS.

Key Skills & Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 8+ years as an AWS engineer, DevOps engineer, or cloud engineer, with a focus on CI/CD pipelines and machine learning solutions.
- Strong expertise in AWS cloud services (S3, EC2, SageMaker, Lambda, CodePipeline, etc.).
- Experience with CI/CD tools such as AWS CodePipeline, GitLab CI, or similar platforms.
- Proficiency with containerization tools such as Docker and Kubernetes for managing microservices architectures.
- Knowledge of infrastructure-as-code (IaC) tools such as AWS CloudFormation and Terraform.
- Familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch) and their deployment in production environments.
- Experience setting up and managing CI/CD pipelines for machine learning or data science solutions.
- Familiarity with version control systems like Git and deployment automation practices.
- Strong knowledge of monitoring and logging tools (e.g., AWS CloudWatch) for real-time performance tracking.
- Ability to work collaboratively with cross-functional teams, including data scientists, ML engineers, and DevOps teams.
- Strong verbal and written communication skills for documentation and knowledge sharing.

Thank you for considering this opportunity with Clover InfoTech. We look forward to hearing from you!

Regards,
Jitendrra Chauhan
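
CI/CD pipelines like the one described above usually include an automated quality gate before a model reaches the deployment stage. A minimal sketch of such a gate in plain Python; the metric names and thresholds are invented for illustration, not part of Clover Infotech's actual pipeline:

```python
# Hypothetical CI/CD promotion gate for an ML model: compare a candidate
# model's evaluation metrics against the current production baseline and
# decide whether the pipeline should proceed to the deployment stage.

def should_promote(candidate: dict, baseline: dict,
                   min_gain: float = 0.01, max_latency_ms: float = 100.0) -> bool:
    """Promote only if accuracy improves by at least `min_gain`
    and serving latency stays within budget."""
    gained_enough = candidate["accuracy"] >= baseline["accuracy"] + min_gain
    fast_enough = candidate["p95_latency_ms"] <= max_latency_ms
    return gained_enough and fast_enough

baseline = {"accuracy": 0.91, "p95_latency_ms": 80.0}
candidate = {"accuracy": 0.93, "p95_latency_ms": 72.0}

print(should_promote(candidate, baseline))  # True: +0.02 accuracy, latency OK
```

In a CodePipeline setup, a check like this would typically run in a CodeBuild stage whose non-zero exit status halts the release.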

Posted 2 months ago


4 - 9 years

10 - 18 Lacs

Pune, Bengaluru, Kolkata

Hybrid


6+ years of experience.
- Work and collaborate with data science and engineering teams to deploy and scale models and algorithms.
- Operationalize complex machine learning models into production, including end-to-end deployment.
- Understand standard machine learning algorithms (regression, classification) and natural language processing concepts (sentiment generation, topic modeling, TF-IDF).
- Working knowledge of standard ML packages such as scikit-learn, VADER sentiment, pandas, and PySpark.
- Design, develop, and maintain adaptable data pipelines for use-case-specific data.
- Integrate ML use cases into business pipelines and work closely with upstream and downstream teams to ensure a smooth handshake of information.
- Develop and maintain pipelines to generate and publish model performance metrics that model owners can use for Model Risk Oversight's model review cadence.
- Support the operationalized models and develop runbooks for maintenance.
- Flexibility to work from the ODC/office all days of the week.
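
The TF-IDF and classification skills named in this listing can be sketched in a few lines of scikit-learn; the toy texts, labels, and query below are invented for illustration:

```python
# Minimal TF-IDF + classification sketch with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible support, very slow",
         "love the new release", "awful experience, would not recommend"]
labels = ["pos", "neg", "pos", "neg"]

# A Pipeline bundles vectorization and the classifier so the exact same
# transformation is applied at training time and at inference time.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["love this product"]))
```

In production, the same fitted `Pipeline` object would be serialized and served, so preprocessing can never drift between training and inference.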

Posted 2 months ago


5 - 7 years

8 - 15 Lacs

Bengaluru

Work from Office


Job Description: We are seeking an experienced AI Developer with a minimum of 5 years of experience to join our team. The ideal candidate will be passionate about artificial intelligence and have a strong background in developing cutting-edge AI solutions.

Roles and Responsibilities:
- Design, implement, and optimize machine learning algorithms and models to solve complex problems
- Use deep learning frameworks such as TensorFlow or PyTorch for model development
- Collaborate with cross-functional teams to integrate AI capabilities into products and services
- Conduct research to stay abreast of the latest advancements in AI and machine learning
- Test and validate models to ensure accuracy and reliability
- Provide technical expertise and guidance on AI-related projects
- Contribute to the development of AI strategies and roadmaps
- Document code, algorithms, and methodologies for knowledge sharing and future reference
- Keep up to date with industry trends and best practices in AI and machine learning

Candidate Requirements:
- Experience: 5 to 7 years
- Proficiency in machine learning algorithms and techniques
- Experience with deep learning frameworks such as TensorFlow or PyTorch
- Strong programming skills in languages such as Python
- Understanding of data structures and algorithms
- Knowledge of computer vision (CV) and GANs
- Familiarity with cloud platforms and services (e.g., AWS, Azure, GCP)
- Ability to work independently and collaboratively in a team environment
- Excellent problem-solving and analytical skills
- Strong communication skills to convey complex concepts to non-technical stakeholders

Posted 3 months ago


3 - 8 years

15 - 22 Lacs

Chennai, Bengaluru, Hyderabad

Hybrid


Candidates who can join immediately or are serving their notice period are preferred.
Must-have skills: MLOps, end-to-end deployment, data pipelines, Python, machine learning.

Curious about the role? What would your typical day look like? We are looking for a Senior Analyst or Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Translate business problems and technical constraints into technical requirements for the desired analytics solution.
- Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
- 3+ years of experience, with at least 1+ years of relevant data science experience.
- Proficiency in structured Python (mandatory).
- Proficiency in at least one cloud technology is mandatory (AWS/Azure/GCP).
- Follows good software engineering practices and has an interest in building reliable and robust software.
- Good understanding of data science concepts and the DS model lifecycle.
- Working knowledge of Linux or Unix environments, ideally in a cloud environment.
- Working knowledge of Spark/PySpark is desirable.
- Model deployment/model monitoring experience is desirable.
- CI/CD pipeline creation is good to have.
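
The model-monitoring experience mentioned above often concretely means drift checks such as the Population Stability Index (PSI), which compares training-time and live feature distributions. The bucketed distributions below are invented, and the thresholds are a common rule of thumb rather than a standard:

```python
# PSI drift check over pre-bucketed feature distributions.
import math

def psi(expected: list, actual: list) -> float:
    """PSI between two bucketed distributions (each list sums to 1.0).
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) for empty buckets
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.25, 0.25, 0.25, 0.25]
print(psi(train_dist, [0.25, 0.25, 0.25, 0.25]))        # 0.0: no drift
print(psi(train_dist, [0.10, 0.20, 0.30, 0.40]) > 0.1)  # True: drift detected
```

A monitoring pipeline would compute this per feature on a schedule and alert (or trigger retraining) when the value crosses a threshold.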

Posted 3 months ago


6 - 8 years

15 - 18 Lacs

Bengaluru

Work from Office


We are seeking an experienced professional in AI and machine learning with a strong focus on large language models (LLMs) for a 9-month project. The role involves hands-on expertise in developing and deploying agentic systems and working with transformer architectures, fine-tuning, prompt engineering, and task adaptation. Responsibilities include leveraging reinforcement learning or similar methods for goal-oriented autonomous systems, deploying models using MLOps practices, and managing large datasets in production environments. The ideal candidate should excel in Python, ML libraries (Hugging Face Transformers, TensorFlow, PyTorch), data engineering principles, and cloud platforms (AWS, GCP, Azure). Strong analytical and communication skills are essential to solve complex challenges and articulate insights to stakeholders.
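
As a small, model-free illustration of one knob involved in deploying the LLMs this role describes, here is temperature scaling of next-token logits in NumPy; the logits and their interpretation are invented for illustration:

```python
# Temperature scaling controls how "peaky" an LLM's next-token
# distribution is before sampling.
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.2]      # scores for three candidate tokens
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)

# Lower temperature concentrates mass on the top token; higher spreads it out.
print(sharp.argmax() == flat.argmax())  # True: the ordering is preserved
print(sharp[0] > flat[0])               # True: the top token dominates more
```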

Posted 3 months ago


5 - 8 years

4 - 7 Lacs

Hyderabad

Work from Office


- Experience with Azure OpenAI models and the Azure stack
- Strong experience with prompt engineering and using large language models/generative AI services
- Passion for the engineering process required to train ML/AI models at scale in the cloud
- Strong knowledge of Python and data science libraries
- Strong knowledge of microservices frameworks and asynchronous processing libraries
- Experience implementing machine learning and deep learning solutions
- Strong data analysis skills
- Experience delivering NLP/NLU/NLQ solutions
- Experience building enterprise search solutions
- Nice to have: good understanding of the MLOps process, preferably using the Azure ML service
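
The prompt-engineering skill this listing asks for often starts with something as simple as a few-shot template. A sketch in plain Python with an invented support-ticket task; the actual call to an LLM service (e.g. an Azure OpenAI deployment) is omitted here, as it is service-specific:

```python
# Assemble a few-shot classification prompt; the examples and the routing
# task are hypothetical.
FEW_SHOT_EXAMPLES = [
    ("The invoice total is wrong.", "billing"),
    ("I can't log in to my account.", "access"),
]

def build_prompt(query: str) -> str:
    """Build a few-shot prompt for a support-ticket router; the model is
    expected to complete the final 'Category:' line."""
    lines = ["Classify each support ticket into a category.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}\nCategory: {label}\n")
    lines.append(f"Ticket: {query}\nCategory:")
    return "\n".join(lines)

print(build_prompt("Please reset my password."))
```

The completed string is what gets sent as the user message; the few-shot pairs steer the model's output format without any fine-tuning.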

Posted 3 months ago


11 - 15 years

16 - 25 Lacs

Noida

Work from Office


Title: LLM Ops Engineer
Location: Noida
Experience: 11-15 Years

Description & Requirements

Position Summary:
- The LLMOps (large language model operations) engineer will play a pivotal role in building and maintaining the infrastructure and pipelines for our cutting-edge Generative AI applications, establishing efficient and scalable systems for LLM research, evaluation, training, and fine-tuning.
- The engineer will be responsible for managing and optimizing large language models (LLMs) across various platforms.
- This position is uniquely tailored for those who excel in crafting pipelines, cloud infrastructure, environments, and workflows.
- Your expertise in automating and streamlining the ML lifecycle will be instrumental in ensuring the efficiency, scalability, and reliability of our Generative AI models and the associated platform.
- The LLMOps engineer's expertise will ensure the smooth deployment, maintenance, and performance of these AI platforms and powerful large language models.
- You will follow Site Reliability Engineering & MLOps principles and will be encouraged to contribute your own best practices and ideas to our ways of working.
- Reporting to the Head of Cloud Native Operations, you will be an experienced thought leader, comfortable engaging senior managers and technologists. You will engage with clients, display technical leadership, and guide the creation of efficient and complex products/solutions.

Key Responsibilities:

Technical & Architectural Leadership:
- Contribute to the technical delivery of projects, ensuring a high quality of work that adheres to best practices, brings innovative approaches, and meets client expectations. Project types include the following (but are not limited to):
- Solution architecture, proofs of concept (PoCs), MVPs, and the design, development, and implementation of ML/LLM pipelines for generative AI models, encompassing data ingestion, pre-processing, training, deployment, and monitoring.
- Automate ML tasks across the model lifecycle.
- Contribute to thought leadership across the Cloud Native domain with an expert understanding of advanced AI solutions using Large Language Model (LLM) & Natural Language Processing (NLP) techniques and partner technologies.
- Collaborate with cross-functional teams to integrate LLM and NLP technologies into existing systems.
- Ensure the highest levels of security and compliance are maintained in all ML and LLM operations.
- Stay abreast of the latest developments in ML and LLM technologies and methodologies, integrating these innovations to enhance operational efficiency and model effectiveness.
- Collaborate with global peers from partner ecosystems on joint technical projects. This partner ecosystem includes Google, Microsoft, AWS, IBM, Red Hat, Intel, Cisco, and Dell/VMware.

Service Delivery:
- Provide a technical hands-on contribution. Create scalable infrastructure to support enterprise loads (distributed GPU compute, foundation models, orchestrating across multiple cloud vendors, etc.).
- Ensure reliable and efficient platform operations.
- Apply data science, machine learning, deep learning, and natural language processing methods to analyse, process, and improve the model's data and performance.
- Create and optimize prompts and queries for retrieval-augmented generation, applying prompt engineering techniques to enhance the model's capabilities and user experience with respect to operations and the associated platforms.
- Provide client-facing influence and guidance, engaging in consultative client discussions and performing a Trusted Advisor role.
- Provide effective support to Sales and Delivery teams.
- Support sales pursuits and enable revenue growth.
- Define the modernization strategy for client platforms and associated IT practices, create solution architecture, and provide oversight of the client journey.
Innovation & Initiative:
- Always maintain hands-on technical credibility, stay at the forefront of the industry, and be prepared to show and lead the way forward for others.
- Engage in technical innovation and support our position as an industry leader.
- Actively contribute to sponsorship of leading industry bodies such as the CNCF and the Linux Foundation.
- Contribute to thought leadership by writing whitepapers and blogs and by speaking at industry events.
- Be a trusted, knowledgeable internal innovator driving success across our global workforce.

Client Relationships:
- Advise on best practices related to platform & operations engineering and cloud native operations, run client briefings and workshops, and engage technical leaders in strategic dialogue.
- Develop and maintain strong relationships with client stakeholders.
- Perform a Trusted Advisor role.
- Contribute to technical projects with a strong focus on technical excellence and on-time delivery.

Mandatory Skills & Experience:
- Expertise in designing and optimizing machine-learning operations, with a preference for LLMOps.
- Proficient in Data Science, Machine Learning, Python, SQL, and Linux/Unix shell scripting.
- Experience with Large Language Models and Natural Language Processing (NLP), including researching, training, and fine-tuning LLMs. Contribute to fine-tuning Transformer models for optimal performance in NLP tasks, if required.
- Implement and maintain automated testing and deployment processes for machine learning models with respect to LLMOps.
- Implement version control, CI/CD pipelines, and containerization techniques to streamline ML and LLM workflows.
- Develop and maintain robust monitoring and alerting systems for generative AI models, ensuring proactive identification and resolution of issues.
- Research or engineering experience in deep learning with one or more of the following: generative models, segmentation, object detection, classification, model optimisation.
- Experience implementing RAG frameworks as part of production-ready products.
- Experience setting up infrastructure for the latest technologies such as Kubernetes, Serverless, Containers, Microservices, etc.
- Experience in scripting/programming to automate deployments and testing, with tools like Terraform and Ansible and scripting languages like Python, Bash, and YAML.
- Experience with open-source and enterprise CI/CD tool sets such as Argo CD and Jenkins (others like Jenkins X, Circle CI, Tekton, Travis, and Concourse are an advantage).
- Experience with the GitHub/DevOps lifecycle.
- Experience with observability solutions (Prometheus, EFK stacks, ELK stacks, Grafana, Dynatrace, AppDynamics).
- Experience in at least one of the clouds, for example Azure/AWS/GCP.
- Significant experience with microservices-based, container-based, or similar modern approaches to applications and workloads.
- Exemplary verbal and written communication skills (English). Able to interact and influence at the highest level, you will be a confident presenter and speaker, able to command the respect of your audience.

Desired Skills & Experience:
- Bachelor's-level technical degree or equivalent experience; Computer Science, Data Science, or Engineering background preferred; Master's degree desired.
- Experience in LLMOps or related areas, such as DevOps, data engineering, or ML infrastructure.
- Hands-on experience deploying and managing machine learning and large language model pipelines on cloud platforms (e.g., AWS, Azure) for ML workloads.
- Familiarity with data science, machine learning, deep learning, and natural language processing concepts, tools, and libraries such as Python, TensorFlow, PyTorch, NLTK, etc.
- Experience using retrieval-augmented generation and prompt engineering techniques to improve the model's quality and diversity and to improve operational efficiency. Proven experience developing and fine-tuning Language Models (LLMs).
- Stay up to date with the latest advancements in Generative AI, conduct research, and explore innovative techniques to improve model quality and efficiency.
- The ideal candidate will already be working within a System Integrator, Consulting, or Enterprise organisation, with 8+ years of experience in a technical role within the Cloud domain.
- Deep understanding of core practices including SRE, Agile, Scrum, XP, and Domain-Driven Design. Familiarity with the CNCF open-source community.
- Enjoy working in a fast-paced and dynamic environment using the latest technologies.
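
The RAG experience this role requires has two halves, retrieval and generation; the retrieval half can be sketched with a TF-IDF index and cosine similarity (production systems typically use dense embeddings and a vector store instead). The documents and query below are invented:

```python
# Toy retrieval step of a RAG pipeline: index documents, then fetch the
# most relevant ones for a query to use as LLM grounding context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Kubernetes schedules containers across a cluster of nodes.",
    "Prometheus scrapes metrics and fires alerts on thresholds.",
    "Terraform declares cloud infrastructure as code.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query; these would be
    inserted into the LLM prompt as grounding context."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sims.argsort()[::-1][:k]
    return [docs[i] for i in ranked]

print(retrieve("how does terraform manage infrastructure?"))
```

The generation half would then prepend the retrieved text to the user's question before calling the model, so answers are grounded in the indexed documents.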

Posted 3 months ago


10 - 15 years

25 - 35 Lacs

Pune, Bengaluru, Gurgaon

Hybrid


Dear,

I hope you're doing well! We have an exciting opportunity at Xebia for a Data & AI Architect role, and I wanted to reach out to see if you'd be interested.

Role: Data & AI Architect
Work Mode: Hybrid (3 days a week) across Xebia locations
Notice Period: Immediate joiners or up to 30 days

Job Description: We are seeking a highly skilled Data & AI Architect with expertise in Data Engineering, AI/ML, Generative AI, MLOps, and cloud platforms (AWS, Azure, or Databricks).

Key Skills Required:
- Data ingestion, transformation, and data modeling
- Machine learning (MLOps), AI, generative AI, LLMs
- Infrastructure as code (IaC): Terraform, CloudFormation
- Data governance: AWS DataZone, Unity Catalog, Azure Purview
- Experience with Snowflake, Redshift, and Databricks
- NLP, computer vision, model deployment & monitoring

How to Apply: If you're interested, kindly share your details along with your updated resume in the following format to vijay.s@xebia.com:
- Full Name:
- Total Experience (must be 10+ years):
- Current CTC:
- Expected CTC:
- Current Location:
- Preferred Location:
- Notice Period / Last Working Day (if serving notice):
- Primary Skill Set (choose from above or mention any other relevant expertise):

Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia. If this role isn't the right fit for you but you know someone who would be a great match, feel free to share this opportunity with them! Looking forward to your response.

Best Regards,
Vijay S
Assistant Manager - TAG
https://www.linkedin.com/in/vijay-selvarajan/

Posted 3 months ago


3 - 6 years

20 - 30 Lacs

Hyderabad

Work from Office


JOB DESCRIPTION
Designation: ML Operations Engineer
Location: Hyderabad, India
Work Mode: Office
Reporting to: Principal Data Scientist

Job Overview: As an ML Operations Engineer at Foundation AI, you will design, develop, and maintain machine learning pipelines. You will work with structured and unstructured data. Your primary responsibility is to streamline the data science pipeline by automating the steps from data gathering to model deployment. Lifelong learning is crucial for long-term success, and we encourage you to stay current with the latest research by attending conferences and sharing your knowledge throughout the enterprise.

Responsibilities:
- Take responsibility for setting up the maintainable and reliable ML pipelines on which our data scientists train models.
- As part of an agile team, your ideas will be heard and will impact the decision-making process.
- With our goal to invent for life, you will work on solutions that are both innovative and ethical.
- Collaborate with ML engineers, data scientists, software developers, and DevOps engineers to have a real-world impact.
- Optimize the cost and latency of services in production.
- Deploy machine learning models in a scalable manner by utilizing model-serving tools.

Skills and Tools:
- 3+ years of experience with Python, Linux skills, and machine learning principles
- 3+ years of experience building and operating ML pipelines and data platforms in production
- 3+ years of experience in API design, distributed architectures, and orchestration of microservices
- Experience with container-based deployments (e.g., Docker, Kubernetes)
- Experience deploying deep learning models to a production environment
- Model lifecycle management (e.g., MLflow, Kubeflow): 2+ years of experience
- Workflow automation (e.g., Airflow): 2+ years of experience
- Version control of model files (e.g., DVC): 2+ years of experience
- Exposure to deep learning frameworks such as PyTorch, TensorFlow, etc.: 2+ years of experience
- Experience working with the cloud (AWS, Azure, GCP, etc.): 2+ years of experience
- Personality and working practice: motivating attitude, profound communication, strong interpersonal skills, structured and analytical
- Experience using serving tools (RayServe, KServe): 1+ year of experience
- Experience building and maintaining LLM pipelines: 1+ year of experience

Education: Bachelor's degree in Computer Science, Engineering, a related field, or equivalent work experience.

Our Commitment: At Foundation AI, we're committed to creating an inclusive and diverse workplace. We value equal opportunity and affirmative action principles, giving everyone an equal chance to succeed. We're dedicated to offering equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status. Upholding these values and adhering to applicable laws is paramount to us. For any feedback or inquiries, please contact us at careers@foundationai.com

Learn more about us at www.foundationai.com
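
The workflow-automation requirement (e.g. Airflow) boils down to running a DAG of tasks in dependency order. A tiny stand-in for what an orchestrator does, using Python's standard-library `graphlib`; the task names are invented:

```python
# Execute a pipeline DAG in topological (dependency) order.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on
pipeline = {
    "ingest": set(),
    "validate": {"ingest"},
    "train": {"validate"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def run(dag: dict) -> list:
    """Run tasks in an order that respects every dependency edge."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real runner would invoke the task here
    return order

run(pipeline)
```

Real orchestrators add what this sketch omits: scheduling, retries, parallelism of independent branches, and per-task logging.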

Posted 3 months ago


6 - 11 years

18 - 33 Lacs

Bengaluru

Work from Office


Job Description
AI Architect with at least 5 years of experience designing and deploying enterprise-grade AI/ML solutions, specializing in Databricks, MosaicML, and Large Language Models (LLMs). Expertise in building scalable AI architectures that integrate deep learning, natural language processing (NLP), and MLOps best practices to enable real-time decision-making and predictive insights. Proven ability to collaborate with cross-functional teams to operationalize AI models while ensuring data security, scalability, and performance in cloud environments.

Key Qualifications
- AI/ML Architecture & Deployment: Extensive experience designing and deploying end-to-end AI pipelines using Databricks and MosaicML. Expertise in model training, fine-tuning, and inferencing of LLMs, ensuring optimal performance and cost-efficiency in production environments.
- Databricks Expertise: Strong hands-on experience with Databricks for data processing, model training, and model serving. Proficient in Databricks Workflows, MLflow, Delta Lake, and advanced SQL/Scala/PySpark for building scalable AI pipelines.
- MosaicML Proficiency: Proven expertise in using MosaicML for fine-tuning and training foundation models, leveraging model compression and distributed training techniques to reduce latency and enhance model performance.
- LLM & NLP Expertise: In-depth knowledge of Large Language Models (LLMs), including OpenAI GPT models, BERT, LLaMA, and custom fine-tuning techniques. Experienced in building conversational AI, semantic search, text classification, and other NLP-based applications.
- Model Optimization & MLOps: Experience implementing MLOps pipelines with tools like MLflow, Kubeflow, and Airflow to automate model training, versioning, deployment, and monitoring. Skilled in optimizing model inference with techniques such as quantization, pruning, and distributed inferencing.
- Data Pipeline & Feature Engineering: Expert in designing feature stores and scalable data pipelines using Apache Spark, Databricks Delta, and AWS Glue to facilitate efficient feature generation, model retraining, and data versioning.
- Cloud AI/ML Infrastructure: Proficiency in AWS, Azure, and GCP, with expertise in setting up AI environments using EC2, S3, EKS, Lambda, and SageMaker. Ability to design fault-tolerant and highly available AI/ML infrastructure on cloud platforms.
- AI Governance & Security: Experience implementing AI governance frameworks, ensuring compliance with AI ethics, bias mitigation, and explainability requirements. Knowledge of data privacy, GDPR, and model interpretability in enterprise environments.

Preferred Experience
- Experience with RAG (retrieval-augmented generation) pipelines and semantic search
- Familiarity with vector databases such as Pinecone, FAISS, and Weaviate
- Exposure to LangChain, Hugging Face, and OpenAI APIs
- Experience building custom AI/LLM solutions tailored to enterprise use cases

Technical Skills
- AI/ML Platforms: Databricks, MosaicML, SageMaker, Azure ML, GCP AI
- Programming & Frameworks: Python, PyTorch, TensorFlow, Hugging Face, PySpark, SQL
- MLOps & Orchestration: MLflow, Kubeflow, Airflow, Docker, Kubernetes, CI/CD
- LLM & NLP Models: GPT-3/4, BERT, LLaMA, Falcon, BLOOM, T5, LangChain
- Databricks Tools: Databricks SQL, Delta Lake, AutoML, Unity Catalog
- Cloud Platforms: AWS (S3, EC2, Lambda), Azure, GCP
- Version Control & Automation: Git, Jenkins, Terraform, CloudFormation

Certifications (preferred but not mandatory)
- Databricks Certified Professional Data Engineer/AI Engineer
- MosaicML Certification in Model Training & Optimization
- AWS Certified Machine Learning Specialty
- Microsoft Certified: Azure AI Engineer Associate
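
Of the inference-optimization techniques named under Model Optimization & MLOps, quantization is the easiest to show in miniature: symmetric per-tensor int8 quantization of a weight matrix in NumPy, trading at most half a quantization step of error per weight for a 4x smaller memory footprint. The weights are random toys, not any real model:

```python
# Symmetric per-tensor int8 quantization sketch: w ≈ q * scale.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights onto int8 with a single per-tensor scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)

max_err = float(np.abs(dequantize(q, scale) - w).max())
print(q.nbytes, w.nbytes)  # int8 storage is 4x smaller than float32
```

Production quantization schemes refine this idea with per-channel scales, calibration data, and quantization-aware training to limit accuracy loss.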

Posted 3 months ago


1 - 6 years

25 - 32 Lacs

Hyderabad

Work from Office


Machine Learning Engineer - NLP & MLOps Experience: 1-4 Years Exp Salary : INR 25,00,000-33,00,000 / year Preferred Notice Period : Within 30 Days Shift : 9:00AM to 6:00PM IST Opportunity Type: Onsite (Hyderabad) Placement Type: Contractual Contract Duration: Full-Time, Indefinite Period (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : and Falcon, ChatGPT, LLAMA, LLM, MLOps, neural networks., Pytorch, TensorFlow, NLP, Python Good to have skills : Communication, AWS, Azure, Docker, GCP, Kubernetes Vujis (One of Uplers' Clients) is Looking for: Machine Learning Engineer/Data Engineer who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Were offering an exciting opportunity for an Machine Learning Engineer/Data Engineer to join our dynamic team at Vujis, a company focused on simplifying international trade. We are a data company working with some of the largest import/export and manufacturing companies in the Asia & Europe. Our data company helps manufacturing companies find new customers, analyze competitors, and source new suppliers internationaly. Key Responsibilities: Design, build, and maintain scalable data pipelines for diverse datasets. Develop and implement advanced Natural Language Processing (NLP) models and neural network architectures. Perform data cleaning, preprocessing, and augmentation to ensure high-quality inputs for machine learning models. Collaborate with cross-functional teams to deploy machine learning models in production environments, adhering to MLOps best practices. Stay updated with the latest advancements in large language models, including ChatGPT, LLAMA, and Falcon, and explore their applications. Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 
Minimum of 1.5-2 years of professional experience in machine learning, with a focus on NLP and neural networks. Proficiency in Python and experience with machine learning libraries such as TensorFlow, PyTorch, or similar. Demonstrated experience in designing and managing scalable data pipelines. Strong understanding of MLOps principles and experience deploying models in production environments. Familiarity with large language models like ChatGPT, LLAMA, Falcon, etc. Preferred Skills: Experience with cloud platforms (AWS, Azure, GCP) for machine learning deployments. Knowledge of containerization and orchestration tools such as Docker and Kubernetes. Strong problem-solving abilities and excellent communication skills. Why work with us? Opportunity to work on groundbreaking projects in AI and machine learning. Mentorship & Learning: work closely 1-1 with the founders of the company. Career Growth: there's room to be promoted. Friendly Culture: join a supportive team that values your input and contributions. Exciting Industry: international trade & export is an outdated space ready to be improved with technology. Attractive Salary: we pay well for high-performing individuals. If you're passionate about advancing NLP and machine learning technologies and meet the above qualifications, we'd love to hear from you. Engagement Type: Direct-hire. Job Type: Permanent. Location: Onsite (Hyderabad). Working time: 9:00 AM to 6:00 PM IST. Interview Process: R1 - Cultural discussion; R2 - Technical interview with a Senior Data Scientist. How to apply for this opportunity - Easy 3-Step Process: 1. Click on Apply and register or log in on our portal. 2. Upload your updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview! About Our Hiring Partner: Vujis is a cutting-edge platform that empowers exporters and manufacturing companies to find new customers and grow their business globally.
Using advanced import-export data and AI-driven insights, Vujis helps businesses identify potential buyers, understand market trends, and gain a competitive edge in the international marketplace. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 months ago

2 - 7 years

0 - 3 Lacs

Chennai, Bengaluru, Hyderabad

Hybrid

We are looking for a Senior Analyst or Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will: Engage with clients to understand their business context. Translate business problems and technical constraints into technical requirements for the desired analytics solution. Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes. Key Qualifications: 3+ years of experience with at least 1+ years of relevant DS experience. Proficient in structured Python (mandatory). Proficiency in at least one cloud technology (AWS/Azure/GCP) is mandatory. Follows good software engineering practices and has an interest in building reliable and robust software. Good understanding of DS concepts and the DS model lifecycle. Working knowledge of Linux or Unix environments, ideally in a cloud environment. Working knowledge of Spark/PySpark is desirable. Model deployment / model monitoring experience is desirable. CI/CD pipeline creation is good to have. Excellent written and verbal communication skills.

Posted 3 months ago

12 - 16 years

17 - 25 Lacs

Noida

Work from Office

Title : AI Architect (Google) Seniority : Senior Experience : 12-16 Location : Noida Description & Requirements : Position Summary : The Senior AI Architect is responsible for designing and implementing advanced AI and cloud architectures, with a strong emphasis on integrating generative AI technologies using Google Cloud Platform (GCP). This role requires a broad understanding of AI and cloud platforms, along with the ability to engage with customers on a variety of architectural topics in both cloud and data center environments. The ideal candidate is passionate about GenAI and AI technologies, stays current with industry trends, and drives innovation within the organization and for clients. As a senior leader, you will interact frequently with customers, provide expert opinions, and contribute to strategic vision. Key Responsibilities : Technical & Engineering Leadership : - Design and implement AI and cloud architectures, integrating GCP GenAI technologies like Gemini, GPT-4o, and Gemma to enhance functionality and scalability. - Lead architectural discussions with clients, providing expert guidance on best practices for AI and cloud integration using GCP. - Ensure solutions align with microservice and container-based environments across public, private, and hybrid clouds using Google Kubernetes Engine (GKE). - Contribute to thought leadership in the Cloud Native domain with a strong understanding of GCP technologies. - Collaborate on technical projects with global partners, leveraging GCP's extensive capabilities. Service Delivery & Innovation : - Develop GenAI solutions from ideation to MVP using GCP services such as Vertex AI, ensuring high performance and reliability within cloud-native frameworks. - Optimize AI and cloud architectures on GCP to meet client requirements, balancing efficiency and effectiveness. 
- Evaluate existing complex solutions and recommend architectural improvements to transform applications with GCP's cloud-native/12-factor characteristics. - Promote the adoption of GCP GenAI technologies within cloud-native projects, driving initiatives that push the boundaries of AI integration in GCP services. Thought Leadership and Client Engagement : - Provide architectural guidance to clients on incorporating GenAI and machine learning into their GCP cloud-native applications and architectures. - Conduct workshops, briefings, and strategic dialogues to educate clients on AI benefits and applications, building strong, trust-based relationships. - Act as a trusted advisor, contributing to technical projects (PoCs and MVPs) with a focus on technical excellence and on-time delivery. - Author whitepapers, blogs, and speak at industry events, maintaining a visible presence as a thought leader in AI and cloud architecture. - Create and record videos to share insights and opinions on AI and cloud technologies, enhancing industry leadership. Collaboration and Multi-Customer Management : - Engage with multiple customers simultaneously, providing high-impact architectural consultations and fostering strong relationships. - Work closely with internal teams and global partners to ensure seamless collaboration and knowledge sharing across projects. - Maintain a hands-on technical credibility, staying ahead of industry trends and mentoring others in the organization. Mandatory Skills & Experience : - Experience: 8+ years in cloud and AI architecture design, 5+ years in software development. - Technologies: Proficiency in Python, Java (and/or Golang), and Spring; expertise in Google Cloud Platform; Google Kubernetes Engine (GKE) and containerization. - AI Expertise: Advanced machine learning algorithms, GenAI models (e.g., Gemini, GPT-4o, Gemma), NLP techniques, Vector Databases, Fine-Tuning and GCP Vertex AI components. 
- Experience with GCP's AI tools such as Vertex AI, TensorFlow, and AutoML is required. - Big Data: Experience with BigQuery, Google Cloud Dataflow, and Google Cloud Storage. Desired Skills & Experience : - Deep knowledge of machine learning operations (MLOps) and experience in deploying, monitoring, and maintaining AI models in production environments on GCP. - Proficiency in data engineering for AI, including data preprocessing, feature engineering, and pipeline creation using Google Cloud Dataflow and BigQuery ML. - Expertise in AI model fine-tuning and evaluation, with a focus on improving performance for specialized tasks. - Knowledgeable about AI ethics and bias mitigation, with experience in implementing strategies to ensure fair and unbiased AI solutions. - Serverless Computing and Distributed Systems on GCP. - Deep Learning Frameworks (TensorFlow, PyTorch) integrated with GCP AI Platform. - Innovation and Emerging Technology Trends. - Strategic AI Vision and Roadmapping. - Enthusiastic about working in a fast-paced environment using the latest technologies, and passionate about a dynamic and high-energy Labs atmosphere. Verifiable Certification : At least two recognized cloud professional certifications: one must be from Google Cloud Platform (e.g., Google Professional Cloud Architect) or demonstrated experience running GCP-based AI projects in production. Another certification can be from Google or a relevant AI/cloud provider (e.g., Microsoft Azure, AWS). Soft Skills and Behavioral Competencies : - Exemplary communication and leadership skills, capable of inspiring teams and making strategic decisions that align with business goals.
- Demonstrates a strong customer orientation, innovative problem-solving abilities, and effective cross-cultural collaboration. - Adept at driving organizational change and fostering a culture of innovation.

Posted 3 months ago

6 - 11 years

20 - 35 Lacs

Bengaluru

Hybrid

We are looking for a Cloud DevOps Engineer to build, automate, and scale our product, which is a multi-cloud, event-driven AI system for real-time anomaly detection and alerting. You will be responsible for cloud infrastructure, Kubernetes, automation, CI/CD, and observability, ensuring high availability and performance. While this is a DevOps-heavy role, knowledge of event streaming, real-time data processing, and ML pipeline automation will be a plus. Key Responsibilities: Building, automating, and maintaining robust and scalable cloud infrastructure, with a strong emphasis on automation, security, and observability. 1. Infrastructure & Cloud Automation: Multi-Cloud Management: Design, implement, and manage infrastructure across various cloud platforms (e.g., AWS, Azure, GCP) and on-premises environments. IaC Implementation: Utilize Infrastructure-as-Code (IaC) principles to automate infrastructure provisioning and management using Terraform or Pulumi. Configuration Management: Automate system configuration and application deployments using Ansible (or similar configuration management tools) to ensure consistency and reliability. Kubernetes Optimization: Optimize the performance and cost-effectiveness of both cloud-based (EKS, AKS, GKE) and self-hosted Kubernetes clusters. 2. CI/CD & Deployment Automation: Design and implement GitOps workflows using tools like Jenkins and GitHub Actions to streamline application deployments. Containerized Deployment: Automate the deployment of containerized applications across various Kubernetes distributions (EKS, AKS, GKE). API Management: Manage API gateways, service discovery, and ingress controllers using technologies like Nginx. 3. Event-Driven & Real-Time Processing (Supporting Role): Real-Time Data Streaming: Deploy, manage, and maintain Kafka-based solutions (Apache Kafka, Confluent, MSK, Event Hubs, Pub/Sub) for real-time data ingestion and processing.
Database Optimization: Optimize the performance and scalability of various open-source and cloud-based databases (PostgreSQL, TimescaleDB, Elasticsearch, InfluxDB, etc.). MLOps Support: Support Machine Learning (ML) workloads by integrating Kubeflow, MLflow, Ray, Seldon, SageMaker, Vertex AI, and Azure ML with CI/CD pipelines. 4. Observability & Security: Monitoring & Logging: Implement comprehensive monitoring and logging solutions using tools like Prometheus, Grafana, OpenTelemetry, ELK, Loki, and Datadog to proactively identify and resolve issues. Security Implementation: Implement robust security measures, including RBAC, IAM, and Secrets Management (Vault, Sealed Secrets, AWS KMS, Azure Key Vault) to protect sensitive data and infrastructure. Compliance & Best Practices: Ensure adherence to SOC2, GDPR, and other relevant compliance standards, while implementing security best practices across all cloud and open-source technologies. Skills & Qualifications: Must-Have: Strong expertise in Cloud DevOps (AWS, Azure, GCP) and open-source infrastructure. Experience with Terraform, Kubernetes (EKS, AKS, GKE, OpenShift, K3s), Docker, and CI/CD automation. Hands-on experience with Kafka, RabbitMQ, or event-driven microservices. Observability tools (Prometheus, OpenTelemetry, Grafana, ELK). Strong Python or scripting skills for automation. Good-to-Have (Data & AI Integration): Knowledge of real-time data pipelines (Kafka, CDC, Flink, Spark Streaming, Debezium). Experience integrating MLOps tools (Kubeflow, MLflow, DASK, Ray, SageMaker, Vertex AI, Azure ML). Exposure to serverless computing (Lambda, Cloud Functions, Azure Functions).
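The listing above centers on real-time anomaly detection and alerting. As a rough illustration of the core idea only (not the product's actual implementation), a rolling-window z-score check in plain Python might look like this:

```python
from collections import deque
from statistics import mean, pstdev

def make_anomaly_detector(window=20, threshold=3.0):
    """Return a callable that flags values deviating from a rolling window."""
    history = deque(maxlen=window)

    def check(value):
        # Require a little history before alerting; otherwise just record.
        if len(history) >= 5:
            mu, sigma = mean(history), pstdev(history)
            is_anomaly = sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            is_anomaly = False
        history.append(value)
        return is_anomaly

    return check

detector = make_anomaly_detector(window=20, threshold=3.0)
# A steady signal produces no alerts; a large spike does.
normal = [detector(10.0 + (i % 3) * 0.1) for i in range(30)]
spike = detector(50.0)
```

In a production system of the kind described, such a check would typically run over a Kafka stream, with alerts routed through a tool like Prometheus Alertmanager rather than a bare return value.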

Posted 3 months ago

6 - 8 years

18 - 30 Lacs

Bengaluru, Hyderabad

Hybrid

Curious about the role? What would your typical day look like? 6+ years of relevant DS experience. Proficient in structured Python. Proficiency in at least one cloud technology (AWS/Azure) is mandatory. Follows good software engineering practices and has an interest in building reliable and robust software. Good understanding of DS concepts and the DS model lifecycle. Working knowledge of Linux or Unix environments, ideally in a cloud environment. Working knowledge of Spark/PySpark is desirable. Model deployment / model monitoring experience is mandatory. CI/CD pipeline creation is good to have. Excellent written and verbal communication skills. B.Tech from a Tier-1 college / M.S. or M.Tech is preferred.

Posted 3 months ago

5 - 8 years

20 - 25 Lacs

Bengaluru

Work from Office

We, PrecisionTech Global IT Solutions LLP, are hiring an MLOps Engineer as a permanent position. Years of Experience: 5-8 Years. Notice Period: Only immediate joiners preferred. Work location: Bengaluru. Role & responsibilities: Experiment Tracking & Model Management - Weights & Biases [Preferred] - MLflow - TensorBoard - DVC (Data Version Control) - Comet.ml. AWS Services - SageMaker [Required] - EKS (Elastic Kubernetes Service) [Required] - EC2 (Elastic Compute Cloud) [Required] - S3 (Simple Storage Service) [Required] - IAM (Identity and Access Management) [Required] - CloudWatch [Required] - ECR (Elastic Container Registry) [Required] - Amazon EFS [Optional]. Preferred candidate profile: MLOps CI/CD, Python, AWS.
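For candidates unfamiliar with the experiment-tracking tools listed above (Weights & Biases, MLflow, TensorBoard, DVC, Comet.ml), their common core is recording each run's parameters and metrics so experiments stay reproducible and comparable. A minimal stdlib-only sketch of that idea (a toy stand-in, not any tool's real API):

```python
import json
import tempfile
import time
import uuid
from pathlib import Path

class RunTracker:
    """Toy stand-in for the logging core of tools like MLflow or W&B."""

    def __init__(self, root):
        self.root = Path(root)

    def start_run(self, params):
        # Each run gets its own directory with an immutable parameter record.
        run_id = uuid.uuid4().hex[:8]
        run_dir = self.root / run_id
        run_dir.mkdir(parents=True)
        (run_dir / "params.json").write_text(json.dumps(params))
        return run_id

    def log_metric(self, run_id, name, value):
        # Append-only metric log, one JSON object per observation.
        path = self.root / run_id / "metrics.jsonl"
        with path.open("a") as f:
            f.write(json.dumps({"t": time.time(), name: value}) + "\n")

    def load_params(self, run_id):
        return json.loads((self.root / run_id / "params.json").read_text())

tracker = RunTracker(root=tempfile.mkdtemp())
run_id = tracker.start_run({"lr": 0.01, "epochs": 3})
tracker.log_metric(run_id, "loss", 0.42)
```

The real tools add a UI, artifact storage, and collaboration on top, but the run/params/metrics data model above is the part an MLOps engineer wires into training jobs.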

Posted 3 months ago

4 - 9 years

6 - 12 Lacs

Bengaluru

Work from Office

Role & responsibilities Design, develop, and maintain robust AWS cloud infrastructure to support large-scale applications. Architect and implement scalable solutions using AWS services such as EC2, S3, Lambda, RDS, IAM, VPC, CloudFormation, and more. MLOps: Implement and manage MLOps pipelines to streamline the deployment and monitoring of machine learning models. Automate model training, validation, deployment, and monitoring processes to ensure efficiency and reliability. Large Language Models (LLM): Utilize expertise in LLM to enhance our AI-driven solutions, including fine-tuning and optimizing pre-trained models. DevOps Capabilities: Develop and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Implement infrastructure as code (IaC) practices using Terraform, CloudFormation, or AWS CDK. Ensure the highest levels of security, scalability, and performance in all cloud deployments. Monitor and manage the health, performance, and security of cloud environments using AWS CloudWatch, ELK stack (Elasticsearch, Logstash, Kibana), or Prometheus & Grafana. Mentor and guide junior engineers, fostering a culture of continuous learning and improvement.
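One responsibility above, automating model validation before deployment, often reduces in practice to a promotion gate in the CI/CD pipeline: the candidate model is deployed only if it clears an absolute quality bar and does not regress against the serving model. A hedged sketch (the metric names and thresholds here are illustrative, not prescribed by the listing):

```python
def should_promote(candidate_metrics, production_metrics,
                   min_auc=0.70, max_regression=0.01):
    """Gate a deployment: promote only if the candidate clears an absolute
    quality bar and does not materially regress versus production."""
    if candidate_metrics["auc"] < min_auc:
        return False
    return candidate_metrics["auc"] >= production_metrics["auc"] - max_regression

# A candidate slightly below production but within tolerance is promoted;
# one below the absolute bar is blocked regardless of the comparison.
ok = should_promote({"auc": 0.81}, {"auc": 0.815})
blocked = should_promote({"auc": 0.65}, {"auc": 0.60})
```

In an AWS setup like the one described, this check would typically run as a pipeline step (e.g., in CodePipeline or a SageMaker Pipelines condition step) before the endpoint update.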

Posted 3 months ago

4 - 9 years

18 - 30 Lacs

Bengaluru

Hybrid

Hi Techies, wishes from GSN! Pleasure connecting with you! We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully serving the needs of our clients for the last 20 years. At present, GSN is hiring a PYTHON DEVELOPER (AI/ML) for one of our leading MNC clients. Details for your better understanding: 1. WORK LOCATION: Bangalore. 2. Job Role: Python Developer. 3. EXPERIENCE: 4-15 yrs. 4. CTC Range: 18 LPA - 30 LPA. 5. Work Type: Hybrid. ****** Looking for SHORT JOINERs ****** Job Description: Technical Skills - Must Have • Expertise in Python: design, develop, and maintain robust and scalable back-end APIs using Python and relevant frameworks. • Implement efficient algorithms and data structures to handle complex calculations and data processing. • Integrate with various databases and third-party APIs for data access and functionality. • Ensure code quality through unit testing, code reviews, and best practices. • Analyze and understand raw data & develop data pipelines to wrangle data. • ML model development and algorithm design. • Interpret trends and patterns; visualization tools; Python graph libraries. • Good knowledge of Python, DS, Pandas, scikit-learn, NumPy, TensorFlow, Keras, etc. • OpenAI, GCP AI services (Vertex AI, etc.), Azure AI services, AWS AI services, MLOps, NLP. • Contribute to CI/CD pipelines for automated deployments and testing using Kubernetes. ****** Looking for SHORT JOINERs ****** Thanks & Regards, Shobana GSN Consulting Email: shobana@gsnhr.net Web: www.gsnhr.net Google Review: https://g.co/kgs/UAsF9W

Posted 3 months ago

5 - 10 years

10 - 20 Lacs

Bengaluru

Work from Office

MLOps Engineer Bangalore (work at office) 5+ Years Mandatory Skills: Strong proficiency in Generative AI, Large Language Models (LLMs), deep learning, agentic frameworks, and RAG setup. Experience in designing and implementing machine learning models using scikit-learn and TensorFlow. Hands-on expertise with AI/ML frameworks such as Hugging Face, LangChain, LangGraph, and PyTorch. Cloud AI services experience, particularly in AWS SageMaker and AWS Bedrock. MLOps & DevOps: Knowledge of data pipeline setup, Apache Airflow, CI/CD, and containerization (Docker, Kubernetes). API Development: Ability to develop and maintain APIs following RESTful principles. Technical proficiency in Elastic, Python, YAML, and system integrations. Nice to Have: Experience with Observability, Ansible, Terraform, Git, Microservices, AIOps, and scripting in Python. Familiarity with AI cloud services such as Azure OpenAI and Google Vertex AI.
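Since the role above calls for RAG setup, it helps to recall the retrieval step at the heart of RAG: embed the query, score it against a document store, and pass the top matches to the LLM as context. A deliberately simplified bag-of-words version of that scoring (real systems such as those built with LangChain use learned embeddings and a vector database; the documents below are invented for illustration):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

docs = [
    "SageMaker endpoints serve models over HTTPS",
    "Airflow schedules batch data pipelines",
    "Bedrock exposes foundation models through an API",
]
best = retrieve("which service serves models over HTTPS", docs, k=1)
```

Swapping `Counter` vectors for embedding vectors and the list for a vector index is essentially what the RAG frameworks named in the listing automate.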

Posted 3 months ago

5 - 10 years

30 - 37 Lacs

Mohali, Gurgaon

Work from Office

Role & Responsibilities Lead the development and implementation of machine learning models to solve complex business problems. Collaborate with cross-functional teams to gather requirements, understand business objectives, and translate them into technical solutions. Design and architect scalable and efficient machine learning systems. Conduct exploratory data analysis, feature engineering, and model evaluation. Stay updated with the latest advancements in machine learning research and technologies. Mentor and provide guidance to junior team members. Drive best practices in coding, testing, and documentation. Preferred Candidate Profile Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. Proven experience as a Machine Learning Engineer or in a similar role. Proficiency in programming languages: Python, R, Java, Scala. Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, and scikit-learn. Solid understanding of data structures, data modeling, and software architecture principles. Excellent grasp of statistics and mathematics. Experience with MLOps and MLaaS is highly advantageous. Strong problem-solving skills and attention to detail. Excellent communication and interpersonal abilities. Why Join Us? Experience the freedom to focus on meaningful and impactful tasks. Enjoy a flexible and supportive work environment. Maintain a healthy work-life balance. Thrive in a self-accountable culture with minimal micromanagement. Collaborate with a motivated and goal-oriented team. Work alongside an approachable, supportive, and highly skilled management team. Benefit from a competitive compensation package. Stay ahead of the curve by working with emerging, cutting-edge technologies. See your work make a tangible impact on the lives of millions of customers.

Posted 3 months ago

3 - 8 years

15 - 25 Lacs

Chennai, Mumbai, Bengaluru

Hybrid

ML Ops Engineer Skills: Expert in Python programming with experience in libraries like NumPy, Pandas, etc. Experience in API deployment using frameworks like FastAPI, Flask, Django, Tornado, Bottle, etc. Total of 3-7 years of experience in managing machine learning projects end-to-end, with the last project focused on MLOps. Experience in supporting model builds and model deployment for IDE-based models and AutoML tools; experiment tracking, model management, version tracking & model training (Dataiku, DataRobot, Kubeflow, MLflow, neptune.ai); model hyperparameter optimization; model evaluation and explainability (SHAP, TensorBoard). Experience with one of the container technologies (Docker, Kubernetes, EKS, ECS). Experience with multiple cloud providers (AWS, GCP, Azure, etc.), GCP preferred. Experience with MLOps tools such as ModelDB, Kubeflow, Pachyderm, and Data Version Control (DVC). Monitoring build & production systems using automated monitoring and alarm tools. Knowledge of machine learning frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn. MLOps Engineer Responsibilities: Deploying and operationalizing MLOps. Model evaluation and explainability. Model training and automated retraining. Model workflows from onboarding and operations to decommissioning. Model version tracking & governance. Data archival & version management. Model hyperparameter optimization (good to have). Model and drift monitoring. Creating and using benchmarks, metrics, and monitoring to measure and improve services. Providing best practices and executing POCs for automated and efficient model operations at scale. Designing and developing scalable MLOps frameworks to support models based on client requirements. Desirable Education: Deep quantitative/programming background with a degree (Bachelor's, Master's, or Ph.D.) in a highly analytical discipline, like Statistics, Economics, Computer Science, Mathematics, Operations Research, etc.
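The hyperparameter-optimization responsibility mentioned above is, at its simplest, a search loop over a grid of candidate parameters; platforms like Dataiku or Kubeflow (Katib) automate and parallelize the same loop. A minimal sketch with a toy objective (the objective function and parameter names are invented for illustration):

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustively sweep a parameter grid, keeping the best-scoring combo."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)  # in real use: train + evaluate a model
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective with a known optimum at lr=0.1, depth=3.
objective = lambda p: -abs(p["lr"] - 0.1) - abs(p["depth"] - 3)
best, score = grid_search(objective, {"lr": [0.01, 0.1, 1.0],
                                      "depth": [2, 3, 4]})
```

The tools named in the listing replace the inner call with distributed training jobs and add smarter search strategies (random, Bayesian), but the contract — parameters in, score out — is the same.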

Posted 3 months ago

5 - 8 years

15 - 25 Lacs

Bengaluru

Work from Office

Role & responsibilities: Experiment Tracking & Model Management - Weights & Biases [Preferred] - MLflow - TensorBoard - DVC (Data Version Control) - Comet.ml. AWS Services - SageMaker [Required] - EKS (Elastic Kubernetes Service) [Required] - EC2 (Elastic Compute Cloud) [Required] - S3 (Simple Storage Service) [Required] - IAM (Identity and Access Management) [Required] - CloudWatch [Required] - ECR (Elastic Container Registry) [Required] - Amazon EFS [Optional]. Preferred candidate profile: MLOps CI/CD, Python, AWS.

Posted 3 months ago

8 - 13 years

35 - 45 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Hiring Architects: 1. Gen AI Architects 2. MLOps Architects (MLOps + DevOps, MLOps + ML). Location: Mumbai, Pune, Bengaluru. Experience: 8-14 years. Notice Period: immediate to 30 days.

Posted 3 months ago

3 - 8 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

• Build, validate, and maintain predictive risk models for the retail and Bharat banking portfolios, focusing on customer behaviour and marketing risk stages. • Leverage advanced statistical techniques, machine learning algorithms, and alternative data sources to enhance model performance. • Collaborate closely with business units, data engineers, and model validators to ensure timely model development and implementation. • Ensure model compliance with regulatory requirements and internal governance standards. • Analyse large datasets using Python and PySpark, ensuring accuracy and scalability of models.

Posted 3 months ago

Exploring MLOps Jobs in India

MLOps, a combination of machine learning and operations, is a rapidly growing field in India. As companies continue to invest in artificial intelligence and machine learning technologies, the demand for MLOps professionals is on the rise. Job seekers looking to enter this field will find a variety of opportunities across different industries in India.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for MLOps roles:
1. Bangalore
2. Mumbai
3. Hyderabad
4. Pune
5. Delhi

Average Salary Range

The salary range for MLOps professionals in India can vary based on experience and location. On average, entry-level MLOps professionals can expect to earn around INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career progression in MLOps may look like this:
- Junior MLOps Engineer
- MLOps Engineer
- Senior MLOps Engineer
- MLOps Architect
- MLOps Manager

Related Skills

In addition to MLOps skills, professionals in this field are often expected to have knowledge of:
- Machine Learning
- Data Engineering
- Cloud Computing
- Python programming
- DevOps

Interview Questions

Here are 25 interview questions for MLOps roles:
- What is the difference between machine learning and deep learning? (basic)
- Explain the concept of model deployment in MLOps. (medium)
- How do you handle data drift in a machine learning model? (medium)
- What is Docker, and how is it used in MLOps? (basic)
- What is the purpose of version control in MLOps? (basic)
- Describe your experience with CI/CD pipelines in MLOps. (medium)
- How do you monitor the performance of a machine learning model in production? (medium)
- What is Kubernetes, and how is it related to MLOps? (medium)
- Explain the concept of hyperparameter tuning. (medium)
- How do you ensure model reproducibility in MLOps? (advanced)
- What is the difference between batch inference and real-time inference? (medium)
- How do you handle model retraining in MLOps? (medium)
- Describe a time when you had to troubleshoot a machine learning model in production. (medium)
- What is the role of data governance in MLOps? (medium)
- How do you ensure model security in MLOps? (medium)
- Explain the concept of A/B testing in the context of MLOps. (medium)
- What are the key components of a machine learning pipeline? (basic)
- How do you manage model versioning in MLOps? (medium)
- Describe your experience with monitoring and logging in MLOps. (medium)
- What is the purpose of artifact management in MLOps? (basic)
- How do you handle scalability in machine learning systems? (medium)
- What is the difference between supervised and unsupervised learning? (basic)
- Explain the concept of bias and variance in machine learning models. (medium)
- How do you ensure data quality in MLOps? (medium)
- Describe a successful MLOps project you have worked on. (advanced)
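Several questions above probe data drift. One standard interview answer is the Population Stability Index (PSI), which compares the binned distributions of a baseline sample and a live sample; a common rule of thumb treats PSI above 0.2 as significant drift. A minimal sketch (the bin count and the 0.2 threshold are conventions, not fixed rules):

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of a numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor at a tiny value so the log term stays defined for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

baseline = [i / 100 for i in range(100)]       # roughly uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # mass pushed to the right
low_drift = population_stability_index(baseline, baseline)
high_drift = population_stability_index(baseline, shifted)
```

In an interview, it is worth adding that PSI is computed per feature on a schedule, and that a sustained high value usually triggers investigation or retraining rather than an automatic rollback.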

Closing Remark

As the demand for MLOps professionals continues to grow in India, now is a great time to explore opportunities in this field. By honing your skills, gaining relevant experience, and preparing for interviews, you can position yourself for a successful career in MLOps. Good luck with your job search!
