
1845 MLflow Jobs - Page 17

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

19 - 25 Lacs

Hyderabad, Bengaluru

Work from Office

Job Overview The Applied AI Solutions Architect is a strategic role responsible for designing, implementing, and managing AI-driven solutions that align with business objectives. This position bridges technical expertise in artificial intelligence (AI) and machine learning (ML) with business strategy, ensuring scalable, ethical, and high-performing AI systems. The AI Solutions Architect collaborates with cross-functional teams to deliver innovative solutions, leveraging generative AI, cloud platforms, and modern architectures like Retrieval-Augmented Generation (RAG). Responsibilities Solution Design : Architect end-to-end AI/ML pipelines, including data ingestion, preprocessing, model training, deployment, and monitoring, ensuring scalability and performance. Technology Selection : Evaluate and select appropriate AI frameworks, tools, and cloud services (e.g., AWS SageMaker, Azure AI, Google Cloud AI) based on project requirements. Generative AI Implementation : Design solutions using large language models (LLMs) and RAG architectures for applications like content generation, customer engagement, or product design. Collaboration : Work with data scientists, engineers, product managers, and executives to translate business needs into technical solutions, acting as a trusted advisor. Governance and Ethics : Implement responsible AI practices, addressing bias, security, and compliance in AI systems. MLOps and AIOps : Establish CI/CD pipelines, model versioning, and monitoring frameworks to operationalize AI solutions. Thought Leadership : Advocate for AI-driven innovation, mentor teams, and communicate technical concepts to non-technical stakeholders. Performance Optimization : Ensure AI solutions meet latency, cost, and quality requirements, optimizing for production environments. Skill Set Technical Skills : Proficiency in Python, R, or Julia for AI/ML development. Expertise in ML frameworks like TensorFlow, PyTorch, Hugging Face, or Scikit-learn. Experience with cloud platforms (AWS, Azure, Google Cloud) and AI services like Amazon Bedrock or Azure AI Foundry. Knowledge of MLOps tools (e.g., Kubeflow, MLflow) and CI/CD pipelines. Familiarity with generative AI techniques, including prompt engineering, fine-tuning, and RAG. Understanding of data engineering concepts, including ETL processes and data lakes. Soft Skills : Strong communication to bridge technical and business teams. Analytical thinking for evaluating trade-offs and designing optimal solutions. Leadership and mentorship to guide cross-functional teams. Domain Knowledge : Experience in industries like healthcare, finance, or technology, with an understanding of relevant use cases (e.g., drug discovery, personalized marketing). Certifications AWS Certified Machine Learning Specialty Microsoft Azure AI Engineer Associate Google Cloud Professional Machine Learning Engineer Coursera or Edureka AI/ML certifications (e.g., DeepLearning.AI's Generative AI Specialization) ITIL or TOGAF for enterprise architecture alignment (optional) Qualifications Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 8+ years of experience in ML engineering, data science, or software architecture, with 3+ years in AI/ML solution design. Proven track record of deploying AI solutions in production environments.
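For illustration only, a minimal sketch of the MLflow experiment tracking that the posting's MLOps requirement refers to; the dataset, model choice, and experiment name are assumptions, not part of the job description.

```python
# Minimal MLflow tracking sketch. The iris dataset, RandomForest model, and
# "ai-solutions-demo" experiment name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("ai-solutions-demo")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    # Log hyperparameters, metrics, and the serialized model for later promotion
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```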

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

We have an urgent and high-priority requirement for a Java Developer + AI/ML 🔍 Position Details • Location – Hyderabad • Hybrid work model – 12 days from the Hyderabad Infosys office every month and the rest of the days work from home • Immediate Joiners only • Long term opportunity if performance is good • Mode of Interview – In person in Infosys Hyderabad Office. Job Description: • Java + AI/ML role required with at least 5+ years of industry experience on Java, Spring Boot, Spring Data & at least 2 years of AI/ML project / professional experience. • Strong experience in building and consuming REST APIs and asynchronous messaging (Kafka/RabbitMQ). • Working experience in integrating AI/ML models into Java services or calling external ML endpoints (REST/gRPC). • Understanding of ML lifecycle: training, validation, inference, monitoring, and retraining. • Familiarity with tools like TensorFlow, PyTorch, Scikit-Learn, or ONNX. • Prior experience in domain-specific ML implementations (e.g., fraud detection, recommendation systems, NLP chatbots). • Experience working with data formats like JSON, Parquet, Avro, and CSV. • Solid understanding of database systems – both SQL (PostgreSQL, MySQL) and NoSQL (Redis). • Integrate machine learning models (batch and real-time) into backend systems and APIs. • Optimize and automate AI/ML workflows using MLOps best practices. • Monitor and manage model performance, versioning, and rollbacks. • Collaborate with cross-functional teams (DevOps, SRE, Product Engineering) to ensure seamless deployment. • Exposure to MLOps tools like MLflow, Kubeflow, or Seldon. • Experience with any one of the cloud platforms, preferably AWS, and knowledge of observability tools and their metrics, events, logs, and traces (e.g., Prometheus, Grafana, OpenTelemetry, Splunk, Datadog, AppDynamics, etc.). Thanks and regards, Snehil Mishra snehil@ampstek.com linkedin.com/in/snehil-mishra-1104b2154 Desk: 6093602673, Extension: 125 www.ampstek.com https://www.linkedin.com/company/ampstek/jobs/ Ampstek – Global IT Partner Registered Offices: North America and LATAM: USA|Canada|Costa Rica|Mexico Europe: UK|Germany|France|Sweden|Denmark|Austria|Belgium|Netherlands|Romania|Poland|Czech Republic|Bulgaria|Hungary|Ireland|Norway|Croatia|Slovakia|Portugal|Spain|Italy|Switzerland|Malta APAC: Australia|NZ|Singapore|Malaysia|South Korea|Hong Kong|Taiwan|Philippines|Vietnam|Sri Lanka|India MEA: South Africa|UAE|Turkey|Egypt
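For illustration, a minimal Python-side sketch of the kind of external ML endpoint a Java/Spring Boot service could call over REST, as the posting describes; the /predict path, payload schema, and fraud-model file are assumptions, not details from the posting.

```python
# Hypothetical REST inference endpoint that a Java service could call with an
# HTTP client. The route, fields, and "fraud_model.joblib" artifact are
# illustrative assumptions only.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("fraud_model.joblib")  # pre-trained scikit-learn classifier

class Transaction(BaseModel):
    amount: float
    merchant_category: int
    hour_of_day: int

@app.post("/predict")
def predict(txn: Transaction):
    features = [[txn.amount, txn.merchant_category, txn.hour_of_day]]
    score = float(model.predict_proba(features)[0][1])
    return {"fraud_score": score}
```

A Spring Boot caller would simply POST the JSON body to this endpoint and read the `fraud_score` field from the response.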

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for the continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will give you the right visibility and an understanding of the criticality of your developments. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Active contributor to code & development in projects and services Partner with data engineers to ensure data access for discovery and proper data is prepared for model consumption. Partner with ML engineers working on industrialization. Communicate with business stakeholders in the process of service design, training and knowledge transfer. Support large-scale experimentation and build data-driven models. Refine requirements into modelling problems. Influence product teams through data-based recommendations. Research in state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create reusable packages or libraries. Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Leverage big data technologies to help process data and build scaled data pipelines (batch to real time) Implement end-to-end ML lifecycle with Azure Databricks and Azure Pipelines Automate ML models deployments Qualifications BE/B.Tech in Computer Science, Maths, or other technical fields. Overall 2-4 years of experience working as a Data Scientist. 2+ years’ experience building solutions in the commercial or supply chain space. 2+ years working in a team to deliver production level analytic solutions. Fluent in git (version control). Understanding of Jenkins and Docker is a plus. Fluent in SQL syntax. 2+ years’ experience in Statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems. 2+ years’ experience in developing business problem related statistical/ML modeling with industry tools with primary focus on Python or PySpark development. Data Science - Hands on experience and strong knowledge of building machine learning models - supervised and unsupervised models. Knowledge of Time series/Demand Forecast models is a plus Programming Skills - Hands-on experience in statistical programming languages like Python, PySpark and database query languages like SQL Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Cloud (Azure) - Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Business storytelling and communicating data insights in business consumable format. Fluent in one Visualization tool. Strong communications and organizational skills with the ability to deal with ambiguity while juggling multiple priorities Experience with Agile methodology for teamwork and analytics ‘product’ creation.
Experience in Reinforcement Learning is a plus. Experience in Simulation and Optimization problems in any space is a plus. Experience with Bayesian methods is a plus. Experience with Causal inference is a plus. Experience with NLP is a plus. Experience with Responsible AI is a plus. Experience with distributed machine learning is a plus. Experience in DevOps, hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred) Model deployment experience is a plus Experience with version control systems like GitHub and CI/CD tools Experience in Exploratory Data Analysis Knowledge of MLOps / DevOps and deploying ML models is preferred Experience using MLflow, Kubeflow etc. will be preferred Experience executing and contributing to MLOps automation infrastructure is good to have Exceptional analytical and problem-solving skills Stakeholder engagement - BU, vendors. Experience building statistical models in the Retail or Supply chain space is a plus
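For illustration, a minimal sketch of the supervised and unsupervised modelling the role asks for; the synthetic data and model choices are assumptions and stand in for the commercial/supply-chain data mentioned above.

```python
# Supervised classification plus unsupervised clustering on synthetic data.
# All data and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, silhouette_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Supervised: classification with a held-out test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised: clustering the same features
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))
```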

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Overview We are PepsiCo PepsiCo is one of the world's leading food and beverage companies with more than $79 Billion in Net Revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 Billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with Pep+ Positive. For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com. PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo’s leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise. The Data Science Pillar in DA&AI will be the organization where Data Scientists and ML Engineers report to in the broader D+A Organization. DS will also lead, facilitate and collaborate with the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI Products, and will support “pre-engagement” activities as requested and validated by the prioritization framework of DA&AI. Data Scientist - Gurugram and Hyderabad The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Use big data technologies to help process data and build scaled data pipelines (batch to real time) Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP Pipelines. Set up cloud alerts, monitors, dashboards, and logging and troubleshoot machine learning infrastructure Automate ML models deployments Qualifications Minimum 3 years of hands-on work experience in data science / machine learning Minimum 3 years of SQL experience Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers.
BE/BS in Computer Science, Math, Physics, or other technical fields. Data Science - Hands on experience and strong knowledge of building machine learning models - supervised and unsupervised models Programming Skills - Hands-on experience in statistical programming languages like Python and database query languages like SQL Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Any Cloud - Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Model deployment experience will be a plus Experience with version control systems like GitHub and CI/CD tools Experience in Exploratory Data Analysis Knowledge of MLOps / DevOps and deploying ML models is required Experience using MLflow, Kubeflow etc. will be preferred Experience executing and contributing to MLOps automation infrastructure is good to have Exceptional analytical and problem-solving skills
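For illustration, a minimal sketch of the automated model-deployment step this posting mentions: loading a registered MLflow model for batch scoring. The model URI and input columns are assumptions, not details from the posting.

```python
# Hypothetical batch-scoring step using a model from the MLflow registry.
# "models:/demand_forecast/Production" and the columns are illustrative assumptions.
import mlflow.pyfunc
import pandas as pd

model = mlflow.pyfunc.load_model("models:/demand_forecast/Production")
batch = pd.DataFrame({"store_id": [1, 2], "week": [32, 32], "promo": [0, 1]})
batch["prediction"] = model.predict(batch)
print(batch)
```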

Posted 1 week ago

Apply

12.0 years

0 Lacs

Greater Chennai Area

On-site

Job Description The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer’s mission. Join us in shaping a data-driven organization that makes a meaningful global impact. Role Summary We are seeking a technically adept and experienced Data Solutions Engineering Senior Manager who is passionate about and skilled in designing and developing robust, scalable data models. This role focuses on optimizing the consumption of data sources to generate unique insights from Pfizer’s extensive data ecosystems. A strong technical design and development background is essential to ensure effective collaboration with engineering and developer team members. As a Senior Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in designing and building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. Your expertise will be pivotal in creating and maintaining the data capabilities that enable advanced analytics and data-driven decision-making. In this role, you will work closely with stakeholders to understand their needs and collaborate with them to create end-to-end data solutions. This process starts with designing data models and pipelines and establishing robust CI/CD procedures. You will work with complex and advanced data environments, design and implement the right architecture to build reusable data products and solutions, and support various analytics use cases, including business reporting, production data pipelines, machine learning, optimization models, statistical models, and simulations. As the Data Solutions Engineering Senior Manager, you will develop sound data quality and integrity standards and controls. You will enable data engineering communities with standard protocols to validate and cleanse data, resolve data anomalies, implement data quality checks, and conduct system integration testing (SIT) and user acceptance testing (UAT). The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry. Role Responsibilities Project solutioning, including scoping and estimation. Data sourcing, investigation, and profiling. Prototyping and design thinking. Designing and developing data pipelines & complex data workflows. Create standard procedures to ensure efficient CI/CD. Responsible for project documentation and playbook, including but not limited to physical models, conceptual models, data dictionaries and data cataloging. Technical issue debugging and resolutions. Accountable for engineering development of both internal and external facing data solutions by conforming to EDSE and Digital technology standards. Partner with internal / external partners to design, build and deliver best in class data products globally to improve the quality of our customer analytics and insights and the growth of commercial in its role in helping patients. Demonstrate outstanding collaboration and operational excellence. Drive best practices and world-class product capabilities.
Qualifications Bachelor’s degree in a technical area such as computer science, engineering, or management information science. Master’s degree is preferred. 12 to 16 years of combined data warehouse/data lake experience as a data lake/warehouse developer or data engineer. 12 to 16 years of developing data products and data features servicing analytics and AI use cases. Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Technical Skillset 9+ years of hands-on experience in working with SQL, Python, object-oriented scripting languages (e.g. Java, C++, etc.) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 9+ years of hands-on experience designing and delivering data lake/data warehousing projects. Minimum of 5 years of hands-on experience designing data models. Proven ability to effectively assist the team in resolving technical issues. Proficient in working with cloud-native SQL and NoSQL database platforms. Snowflake experience is desirable. Experience with AWS services (EC2, EMR, RDS) and Spark is preferred. Solid understanding of Scrum/Agile is preferred and working knowledge of CI/CD, GitHub, and MLflow. Familiarity with data privacy standards, governance principles, data protection, pharma industry practices/GDPR compliance is preferred. Great communication skills. Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
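For illustration, a minimal sketch of the kind of Python/SQL pipeline step described above: extract from a source table, transform, and load a curated table. The connection string, table names, and columns are assumptions, not details from the posting.

```python
# Hypothetical extract-transform-load step for a curated data layer.
# Connection string and table/column names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@host:5432/warehouse")

# Extract from the raw layer
sales = pd.read_sql("SELECT account_id, product, units, list_price FROM raw_sales", engine)

# Transform: simple curation plus a derived revenue column
sales = sales.dropna(subset=["account_id"])
sales["revenue"] = sales["units"] * sales["list_price"]

# Load into the curated layer, replacing the previous snapshot
sales.to_sql("curated_sales", engine, if_exists="replace", index=False)
```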

Posted 1 week ago

Apply

0.0 - 2.0 years

3 - 10 Lacs

Kolkata, West Bengal

Remote

Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL) Location: Kolkata (Must) Employment Type: Full-Time Experience Level: 1–3 Years About Us: We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration. Key Responsibilities: Build and maintain scalable machine learning pipelines using Python. Deploy and monitor models using MLflow and MLOps stacks. Design and implement data workflows using standard Python libraries such as PySpark. Leverage standard data science libraries (scikit-learn, pandas, numpy, matplotlib, etc.) for model development and evaluation. Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications. Collaborate closely with cross-functional teams to meet business objectives. Handle multiple ML projects simultaneously with robust branching expertise. Must-Have Qualifications: Expertise in Python for data science and backend development. Solid experience with PostgreSQL and MSSQL databases. Hands-on experience with standard data science packages such as Scikit-Learn, Pandas, Numpy, Matplotlib. Experience working with Databricks, MLflow, and Azure. Strong understanding of MLOps frameworks and deployment automation. Prior exposure to FastAPI and GenAI tools like LangChain or Azure OpenAI is a big plus. Preferred Qualifications: Experience in the Finance, Legal or Regulatory domain. Working knowledge of clustering algorithms and forecasting techniques. Previous experience in developing reusable AI frameworks or productized ML solutions. Education: B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field. Why Join Us? Work on cutting-edge ML and GenAI projects. Be part of a collaborative and forward-thinking team. Opportunity for rapid growth and technical leadership. Job Type: Full-time Pay: ₹344,590.33 - ₹1,050,111.38 per year Benefits: Leave encashment Paid sick time Paid time off Provident Fund Work from home Education: Bachelor's (Required) Experience: Python: 3 years (Required) ML: 2 years (Required) Location: Kolkata, West Bengal (Required) Work Location: In person Application Deadline: 02/08/2025 Expected Start Date: 04/08/2025
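For illustration, a minimal sketch of a PySpark data workflow of the kind this posting mentions; the input path and column names are assumptions, not details from the posting.

```python
# Hypothetical feature-preparation job in PySpark. Source/target paths and
# column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

events = spark.read.parquet("s3://bucket/events/")  # assumed source path
features = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy("customer_id")
    .agg(F.count("*").alias("purchase_count"),
         F.sum("amount").alias("total_spend"))
)
features.write.mode("overwrite").parquet("s3://bucket/features/")
```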

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

We are looking for a skilled and passionate AI/ML Engineer to join our team and help us build intelligent systems that leverage machine learning and artificial intelligence. You will design, develop, and deploy machine learning models, work closely with cross-functional teams, and contribute to cutting-edge solutions that solve real-world problems. Key Responsibilities Design and implement machine learning models and algorithms for various use cases (e.g., prediction, classification, NLP, computer vision). Analyze and preprocess large datasets to build robust training pipelines. Conduct research to stay up-to-date with the latest AI/ML advancements and integrate relevant techniques into projects. Train, fine-tune, and optimize models for performance and scalability. Deploy models to production using tools such as Docker, Kubernetes, or cloud services (AWS, GCP, Azure). Collaborate with software engineers, data scientists, and product teams to integrate AI/ML solutions into applications. Monitor model performance in production and continuously iterate for improvements. Document design choices, code, and models for transparency and reproducibility. Preferred Qualifications Experience with NLP libraries (e.g., Hugging Face Transformers, spaCy) or computer vision tools (e.g., OpenCV). Background in deep learning architectures such as CNNs, RNNs, GANs, or transformer models. Knowledge of MLOps practices and tools (e.g., MLflow, Kubeflow, SageMaker). Contributions to open-source AI/ML projects or publications in relevant conferences/journals.
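For illustration, a minimal sketch of training a model and persisting the artifact so it can be packaged into a Docker image or served from cloud storage, as the responsibilities above describe; the dataset, model, and file name are assumptions.

```python
# Hypothetical train-and-persist step; the artifact would be copied into a
# container image at build time. Dataset and file name are illustrative assumptions.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=1000)
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
joblib.dump(model, "model.joblib")  # serialized artifact for the serving image
```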

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

About the Role: We are seeking a highly skilled and principled Responsible AI Evaluator to assess, audit, and ensure the ethical development and deployment of AI models across the enterprise. This role spans traditional ML models, Large Language Models (LLMs), and Generative AI systems, with a strong focus on fairness, transparency, privacy, security, and accountability across multi-cloud environments (AWS, Azure, GCP, etc.). The ideal candidate combines deep technical expertise with a firm grasp of ethical frameworks, regulatory requirements, and responsible AI principles. Key Responsibilities: Model Evaluation & Auditing Evaluate AI/ML, LLM, and GenAI models for bias, fairness, explainability, and robustness. Conduct red-teaming, safety testing, and adversarial analysis across AI pipelines. Perform pre-deployment, post-deployment, and continuous model audits. Responsible AI Frameworks Operationalize and implement RAI principles based on NIST AI RMF, OECD, ISO/IEC 42001, and industry standards. Collaborate with data scientists, legal, compliance, and product teams to enforce responsible AI lifecycle checkpoints. Multi-Cloud Deployment Compliance Ensure AI/ML systems deployed across AWS, Azure, GCP, and hybrid environments meet data residency, compliance, and ethical standards. Evaluate model behavior variability and risk exposure across different cloud providers. Tooling & Automation Build or integrate tools for model explainability (e.g., SHAP, LIME), fairness (AIF360, Fairlearn), and privacy (DP libraries). Leverage LLM-specific eval frameworks (e.g., TruthfulQA, HELM, RAGAS, model-card generators). Documentation & Reporting Produce model cards, system cards, and impact assessments (e.g., DSA, DPIA). Deliver detailed evaluation reports, risk scores, and mitigation recommendations for internal and external stakeholders. Required Qualifications: 5+ years of experience in AI/ML evaluation, testing, or assurance roles. Proven experience with LLMs (e.g., GPT, Claude, LLaMA) and generative AI (e.g., DALL·E, Midjourney, custom diffusion models). Strong knowledge of fairness, bias detection, differential privacy, red-teaming, model transparency techniques. Familiarity with AI regulation and governance (EU AI Act, FTC guidance, DSA, HIPAA, etc.). Experience with cloud platforms (AWS, Azure, GCP) and deploying/evaluating AI services in distributed architectures. Proficiency in Python and libraries such as scikit-learn, Transformers, LangChain, MLflow, PyTorch, etc. Strong communication and reporting skills, with the ability to translate technical findings into business risk language. Preferred Qualifications: Master’s or PhD in Computer Science, Data Science, Ethics, or related field. Prior work with model evaluation benchmarks and large-scale LLM testing frameworks. Knowledge of secure and privacy-preserving ML techniques (e.g., federated learning, homomorphic encryption). Experience contributing to open-source Responsible AI tools or policy frameworks. Why Join Us: Work at the cutting edge of AI ethics, safety, and trust. Collaborate with interdisciplinary teams across engineering, legal, and policy. Influence how AI is responsibly used across industries and sectors. Shape a safer and more accountable AI future.
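For illustration, a minimal sketch of the kind of fairness evaluation this role describes, using Fairlearn (one of the toolkits the posting names); the synthetic labels and the binary sensitive attribute are assumptions.

```python
# Hypothetical fairness audit: per-group metrics and a demographic parity gap.
# The random predictions and binary "group" attribute are illustrative assumptions.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # sensitive attribute, e.g. a demographic flag

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true, y_pred=y_pred, sensitive_features=group,
)
print(mf.by_group)  # per-group metrics for the audit report
print("demographic parity gap:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```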

Posted 1 week ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer – Python (LLM, RAG, LangChain, LlamaIndex) Location: Pune Experience: 1-2 Years Employment Type: Full-time Working Mode: Work from office. Working Days: Monday to Friday About The Role We are looking for a highly skilled AI/ML Engineer with strong Python expertise and hands-on experience in developing LLM-based applications. The ideal candidate should be proficient with Retrieval-Augmented Generation (RAG), LangChain, and LlamaIndex, and be passionate about building cutting-edge AI systems that combine machine learning with advanced language models. Key Responsibilities Design, develop, and deploy LLM-powered applications using frameworks like LangChain and LlamaIndex. Implement and optimize RAG pipelines to enable efficient and scalable retrieval and generation workflows. Fine-tune and integrate large language models (LLMs) such as GPT, LLaMA, Mistral, etc. Build and maintain vector databases (e.g., FAISS, Pinecone, Chroma) for semantic search and context augmentation. Work with unstructured data (documents, PDFs, web data) to extract and process meaningful content for use in AI applications. Collaborate with data scientists, product teams, and backend engineers to deploy AI features into scalable production systems. Monitor performance of deployed models and pipelines; continuously improve based on feedback and usage analytics. Stay updated with the latest developments in generative AI, prompt engineering, and open-source LLM tools. Required Skills & Experience 1+ years of hands-on experience in Python development for AI/ML applications. Solid understanding of large language models, prompt engineering, and model fine-tuning. Proficient with LangChain and LlamaIndex for building composable LLM workflows. Experience building RAG systems, including retrieval, chunking, embedding, and context injection. Familiarity with vector stores such as FAISS, Chroma, Weaviate, or Pinecone. Experience with Hugging Face Transformers, OpenAI, or similar model providers. Good understanding of REST APIs, microservices architecture, and deploying AI services in production. Nice To Have Exposure to cloud platforms (AWS, Azure, or GCP) and managed AI/ML services. Experience with LangGraph, AutoGen, or similar agent orchestration frameworks. Working knowledge of MLOps tools like MLflow, DVC, or Kubeflow. Understanding of data privacy, security, and ethical AI principles. Computer vision. Qualification: BE/BTech/MCA/MSc Benefits Work with state-of-the-art AI and LLM technologies Access to training, certifications, and conferences in AI/ML Collaborative and innovative work culture
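For illustration, a minimal sketch of the retrieval step of a RAG pipeline as described above: chunks are embedded, indexed in FAISS, and the nearest chunks are returned as context for an LLM prompt. The embedding model name and chunk texts are assumptions.

```python
# Hypothetical RAG retrieval step with FAISS and sentence-transformers.
# The "all-MiniLM-L6-v2" model and the chunk texts are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "MLflow tracks experiments, parameters, and models.",
    "LangChain composes LLM calls into chains and agents.",
    "FAISS performs fast nearest-neighbour search over embeddings.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(chunks, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(int(embeddings.shape[1]))  # inner product == cosine on normalized vectors
index.add(embeddings)

query = encoder.encode(["How do I track model experiments?"],
                       normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
context = [chunks[i] for i in ids[0]]  # injected into the LLM prompt as grounding context
print(context)
```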

Posted 1 week ago

Apply

7.0 years

1 - 9 Lacs

Bengaluru

On-site

Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Job Title: Senior Software Engineer – Data Modernization (GenAI) Location: Manyata Tech Park, Bangalore (Hybrid) Business & Team: CommSec is Australia's largest online retail stockbroker. It is one of the most highly visible and visited online assets in Australian financial services. CommSec’s systems utilise a variety of technologies and support a broad range of investors. Engineers within CommSec are offered regular opportunities to work on some of the finest IT systems in Australia, as well as having the opportunity to develop careers across different functions and teams within the wider Bank. Impact & Contribution: Apply core concepts, technology and domain expertise to effectively develop software solutions to meet business needs. You will contribute to building a brighter future for all by ensuring that our team builds the best solutions possible using modern development practices that ensure both functional and non-functional needs are met. If you have a history of building a culture of empowerment and know what it takes to be a force multiplier within a large organization, then you’re the kind of person we are looking for. You will report to the Lead Engineer within Business Banking Technology. Roles & Responsibilities: Build scalable agentic AI solutions that integrate with existing systems and support business objectives. Implement MLOps pipelines. Design and conduct experiments to evaluate model performance and iteratively refine models based on findings. Hands-on experience in automated LLM outcome validation and metrication of AI outputs. Good knowledge of ethical AI practices and the tools to implement them. Hands-on experience in AWS cloud services such as SNS, SQS, Lambda. Experience in big data platform technologies such as the Spark framework and vector DBs. Collaborate with software engineers to deploy AI models in production environments, ensuring robustness and scalability. Participate in research initiatives to explore new AI models and methodologies that can be applied to current and future products. Develop and implement monitoring systems to track the performance of AI models in production. Hands-on DevSecOps experience including continuous integration/continuous deployment, security practices. Essential Skills: The AI Engineer will be involved in the development and deployment of advanced AI and machine learning models. The ideal candidate is highly skilled in MLOps and software engineering, with a strong track record of developing AI models and deploying them in production environments. 7+ years' experience RAG, Prompt Engineering Vector DB, DynamoDB, Redshift Spark framework, Parquet, Iceberg Python MLOps Langfuse, LlamaIndex, MLflow, GLEU, BLEU AWS cloud services such as SNS, SQS, Lambda Traditional Machine Learning Education Qualifications: Bachelor’s or Master's degree in Engineering in Information Technology. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application.
We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 06/08/2025
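For illustration, a minimal sketch of the automated LLM outcome validation the posting mentions, using a BLEU score (one of the metrics it lists); the reference and candidate texts and the quality threshold are assumptions.

```python
# Hypothetical BLEU-based quality gate for LLM outputs. Texts and the 0.3
# threshold are illustrative assumptions only.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = "the customer can transfer funds between linked accounts".split()
candidate = "customers can transfer money between their linked accounts".split()

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)

THRESHOLD = 0.3  # assumed quality gate agreed with the business
print("pass" if score >= THRESHOLD else "fail", f"(BLEU={score:.3f})")
```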

Posted 1 week ago

Apply

10.0 years

2 - 11 Lacs

Bengaluru

On-site

Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from target architecture (if any). Cloud Integration: Ensuring compatibility with cloud services of AWS, and Azure for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhering to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design optimal network plan for given Cloud Infrastructure under the E// network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Creating detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Bangalore Req ID: 770160
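For illustration, a minimal sketch of the feature-drift check mentioned in this posting: compare a production feature sample against the training distribution with a Kolmogorov-Smirnov test and flag drift when the p-value is small. The data and the 0.01 threshold are assumptions.

```python
# Hypothetical drift check that could trigger a retraining pipeline.
# Distributions and threshold are illustrative assumptions only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted distribution

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); trigger retraining")
else:
    print("No significant drift")
```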

Posted 1 week ago

Apply

5.0 years

4 - 7 Lacs

Chennai

Remote

To Apply: Send your updated resume to hr@whitemastery.com with the subject line: Senior AI and ML Engineer – Application or contact (+91) 9176760030 Role Overview As a Senior AI & ML Engineer, you will be responsible for designing, developing, and deploying advanced machine learning models and AI systems. You will work alongside data scientists, backend engineers, and product teams to create robust AI-driven applications using Python , Golang , and various ML frameworks. Key Responsibilities Design and implement ML models and algorithms for real-world business problems. Build scalable backend services in Python and Golang to support AI functionalities. Deploy and optimize ML models into production environments. Lead architecture decisions and code reviews, ensuring performance and maintainability. Collaborate with cross-functional teams including data engineering, product, and DevOps. Stay current with the latest research and trends in AI/ML to propose innovative solutions. Mentor junior engineers and contribute to technical documentation and knowledge sharing. Key Skills & Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. 5+ years of experience in software engineering with at least 3 years focused on ML applications. Strong proficiency in Python and experience with ML libraries (TensorFlow, PyTorch, Scikit-learn, etc.). Good experience working with Golang for backend development or system-level programming. Solid understanding of ML lifecycle – data preprocessing, feature engineering, training, evaluation, and deployment. Experience with containerization tools (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure). Excellent problem-solving skills and ability to translate business needs into technical solutions. Strong communication and teamwork abilities. Nice to Have Experience with MLOps tools like MLflow, Kubeflow, or Vertex AI. Familiarity with RESTful APIs, microservices architecture, and event-driven systems. Background in NLP, computer vision, or time-series analysis. Job Type: Full-time Pay: ₹400,000.00 - ₹700,000.00 per year Benefits: Flexible schedule Paid time off Work from home Application Question(s): Do you have any notice period or can you join immediately? Experience: AI and ML: 3 years (Preferred) Location: Chennai, Tamil Nadu (Preferred) Work Location: In person

Posted 1 week ago

Apply

10.0 years

2 - 10 Lacs

Calcutta

On-site

Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from target architecture (if any). Cloud Integration: Ensuring compatibility with cloud services of AWS, and Azure for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhering to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design optimal network plan for given Cloud Infrastructure under the E// network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Creating detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Bangalore Req ID: 770160

Posted 1 week ago

Apply

0 years

2 - 7 Lacs

Jaipur

On-site

ID: 346 | 2-5 yrs | Jaipur AI Senior Engineer About the Role: We are looking for a highly capable AI Lead Engineer to contribute to the design and delivery of intelligent, scalable AI solutions. This role focuses on building production-grade systems involving LLMs, vector databases, agent-based workflows, and Retrieval-Augmented Generation (RAG) architectures on the cloud. The ideal candidate should demonstrate strong problem-solving abilities, hands-on technical skills, and the ability to align AI design with real-world business needs. Must-Have Skills Strong coding ability in Python and proficiency in SQL. Hands-on experience with vector databases (e.g., Pinecone, FAISS, Weaviate). Practical experience with LLMs (OpenAI, Claude, Gemini, etc.) in real-world workflows. Familiarity with LangChain, LlamaIndex, or similar orchestration tools. Proven track record in delivering scalable AI systems on public cloud (Azure, AWS, or GCP). Experience building and optimizing RAG pipelines. Solid understanding of serverless/cloud-native architecture and event-driven design. Integrate/expose the AI solution in applications using FastAPI, Flask, or Django. Preferred or Good to Have Skills Exposure to agentic AI patterns and multi-agent coordination. Knowledge of AI system safety practices (e.g., hallucination filtering, grounding). Experience with MLOps tools (MLflow, Kubeflow) and CI/CD for ML. Understanding of concept/data drift and retraining strategies in production. Experience working on AI projects involving classification, regression, and clustering models. Prior work on multi-modal AI pipelines (vision + language). Familiarity with real-time inference tuning (batching, concurrency). Demonstrated ability to deliver a variety of AI solutions in production environments across domains. Any Other: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related field. Strong analytical mindset with a clear focus on business-aligned AI delivery. Excellent verbal and written communication skills. Key Responsibilities Collaborate with the Solution Architect to design agentic AI systems (e.g., ReAct, CodeAct, Self-Reflective Agents). Build and deploy scalable RAG pipelines using vector databases and embedding models. Integrate modern AI tools (e.g., LangChain, LlamaIndex, Kagi, Search APIs) into solution workflows. Optimize inference performance for cloud and edge environments. Contribute to the development of feedback loops, drift detection, and self-healing AI systems. Deploy, monitor, and manage AI solutions on cloud platforms (Azure, AWS, or GCP). Translate business use cases into robust technical solutions in collaboration with cross-functional teams.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are PepsiCo PepsiCo is one of the world's leading food and beverage companies with more than $79 Billion in Net Revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 Billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with Pep+ Positive . For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com. PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo’s leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise. The Data Science Pillar in DA&AI will be the organization where Data Scientist and ML Engineers report to in the broader D+A Organization. Also DS will lead, facilitate and collaborate on the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS component and its life cycle within DA&AI Products. And will support “pre-engagement” activities as requested and validated by the prioritization framework of DA&AI. Data Scientist-Gurugram and Hyderabad The role will work in developing Machine Learning (ML) and Artificial Intelligence (AI) projects. Specific scope of this role is to develop ML solution in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Use big data technologies to help process data and build scaled data pipelines (batch to real time) Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP Pipelines. 
Set up cloud alerts, monitors, dashboards, and logging and troubleshoot machine learning infrastructure Automate ML models deployments Qualifications Minimum 3 years of hands-on work experience in data science / machine learning Minimum 3 years of SQL experience Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers BE/BS in Computer Science, Math, Physics, or other technical fields. Data Science – Hands on experience and strong knowledge of building machine learning models – supervised and unsupervised models Programming Skills – Hands-on experience in statistical programming languages like Python and database query languages like SQL Statistics – Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Any Cloud – Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Model deployment experience will be a plus Experience with version control systems like GitHub and CI/CD tools Experience in Exploratory Data Analysis Knowledge of MLOps / DevOps and deploying ML models is required Experience using MLflow, Kubeflow etc. will be preferred Experience executing and contributing to MLOps automation infrastructure is good to have Exceptional analytical and problem-solving skills

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI’s modular, automated, plug-and-play production solution. By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges. We’re a venture-backed biotech startup located in Singapore where some of the world’s smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career. Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another) Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models Build and maintain pipelines for training, testing, and deploying multiple models across different data domains Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules) Qualifications Bachelor’s or Master’s degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain Exposure to hybrid modeling approaches and/or reinforcement learning strategies Experience Experience with multi-model systems Worked with numerical/scientific datasets (multi-modal datasets) Hybrid modelling and/or RL (AI systems) Core Technical Skills Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost Model Orchestration: MLflow, Prefect, Airflow Multi-model Systems: Ensemble learning, model stacking, conditional pipelines Reinforcement Learning: RLlib, Stable-Baselines3 Optimization Libraries: Optuna, Hyperopt, GPyOpt Numerical & Scientific Computing: NumPy, SciPy, pandas Containerization & Deployment: Docker, FastAPI Workflow Management: Snakemake, Nextflow ETL & Data Pipelines: pandas pipelines, PySpark Data Versioning: Git API Design for modular ML blocks You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
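For illustration, a minimal sketch of the model-stacking pattern listed under "Multi-model Systems": two base regressors feed a meta-learner. The synthetic regression data is an assumption standing in for the media-formulation or bioprocess features mentioned above.

```python
# Hypothetical stacking ensemble; data and estimator choices are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=10, noise=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),  # meta-learner combining the base predictions
)
print("R^2:", cross_val_score(stack, X, y, cv=5).mean())
```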

Posted 1 week ago

Apply

5.0 years

10 - 15 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients Salary range: Rs 1000000 - Rs 1500000 (ie INR 10-15 LPA) Min Experience: 5 years Location: Bengaluru JobType: full-time Requirements We are seeking a highly skilled and experienced Computer Vision Engineer to join our growing AI team. This role is ideal for someone with strong expertise in deep learning and a solid background in real-time video analytics, model deployment, and computer vision applications. You'll be responsible for developing scalable computer vision pipelines and deploying them across cloud and edge environments, helping build intelligent visual systems that solve real-world problems. Key Responsibilities: Model Development & Training: Design, train, and optimize deep learning models for object detection, segmentation, and tracking using frameworks like YOLO, UNet, Mask R-CNN, and Deep SORT. Computer Vision Applications: Build robust pipelines for computer vision applications including image classification, real-time object tracking, and video analytics using OpenCV, NumPy, and TensorFlow/PyTorch. Deployment & Optimization: Deploy trained models on Linux-based GPU systems and edge devices (e.g., Jetson Nano, Google Coral), ensuring low-latency performance and efficient hardware utilization. Real-Time Inference: Implement and optimize real-time inference systems, ensuring minimal delay in video processing pipelines. Model Management: Utilize tools like Docker, Git, and MLflow (or similar) for version control, environment management, and model lifecycle tracking. Collaboration & Documentation: Work cross-functionally with hardware, backend, and software teams. Document designs, architectures, and research findings to ensure reproducibility and scalability. Technical Expertise Required: Languages & Libraries: Advanced proficiency in Python and solid experience with OpenCV, NumPy, and other image processing libraries. Deep Learning Frameworks: Hands-on experience with TensorFlow, PyTorch, and integration with model training pipelines. Computer Vision Models: Object Detection: YOLO (all versions) Segmentation: UNet, Mask R-CNN Tracking: Deep SORT or similar Deployment Skills: Real-time video analytics implementation and optimization Experience with Docker for containerization Version control using Git Model tracking using MLflow or comparable tools Platform Experience: Proven experience in deploying models on Linux-based GPU environments and edge devices (e.g., NVIDIA Jetson family, Coral TPU). Professional & Educational Requirements: Education: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or related discipline. Experience: Minimum 5 years of industry experience in AI/ML with a strong focus on computer vision and system-level design. Proven portfolio of production-level projects in image/video processing or real-time systems. Preferred Qualities: Strong problem-solving and debugging skills Excellent communication and teamwork capabilities A passion for building smart, scalable vision systems A proactive and independent approach to research and implementation
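For illustration, a minimal sketch of the real-time video-analytics loop this posting describes: read frames, run a detector, draw results. The `run_detector` function is a hypothetical stand-in for a YOLO or Mask R-CNN model, and the camera index is an assumption.

```python
# Hypothetical real-time inference loop with OpenCV; the detector is a placeholder.
import cv2

def run_detector(frame):
    # Placeholder: a real pipeline would run a YOLO / Mask R-CNN model here
    return [(50, 50, 200, 200, "object", 0.9)]

cap = cv2.VideoCapture(0)  # 0 = default camera; could also be an RTSP stream URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for x1, y1, x2, y2, label, conf in run_detector(frame):
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```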

Posted 1 week ago

Apply

0.0 - 3.0 years

4 - 7 Lacs

Chennai, Tamil Nadu

Remote

To Apply: Send your updated resume to hr@whitemastery.com with the subject line: Senior AI and ML Engineer – Application or contact (+91) 9176760030 Role Overview As a Senior AI & ML Engineer, you will be responsible for designing, developing, and deploying advanced machine learning models and AI systems. You will work alongside data scientists, backend engineers, and product teams to create robust AI-driven applications using Python , Golang , and various ML frameworks. Key Responsibilities Design and implement ML models and algorithms for real-world business problems. Build scalable backend services in Python and Golang to support AI functionalities. Deploy and optimize ML models into production environments. Lead architecture decisions and code reviews, ensuring performance and maintainability. Collaborate with cross-functional teams including data engineering, product, and DevOps. Stay current with the latest research and trends in AI/ML to propose innovative solutions. Mentor junior engineers and contribute to technical documentation and knowledge sharing. Key Skills & Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. 5+ years of experience in software engineering with at least 3 years focused on ML applications. Strong proficiency in Python and experience with ML libraries (TensorFlow, PyTorch, Scikit-learn, etc.). Good experience working with Golang for backend development or system-level programming. Solid understanding of ML lifecycle – data preprocessing, feature engineering, training, evaluation, and deployment. Experience with containerization tools (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure). Excellent problem-solving skills and ability to translate business needs into technical solutions. Strong communication and teamwork abilities. Nice to Have Experience with MLOps tools like MLflow, Kubeflow, or Vertex AI. Familiarity with RESTful APIs, microservices architecture, and event-driven systems. Background in NLP, computer vision, or time-series analysis. Job Type: Full-time Pay: ₹400,000.00 - ₹700,000.00 per year Benefits: Flexible schedule Paid time off Work from home Application Question(s): Do you have any notice period or can you join immediately? Experience: AI and ML: 3 years (Preferred) Location: Chennai, Tamil Nadu (Preferred) Work Location: In person
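As a concrete illustration of the ML lifecycle this posting describes (preprocessing, training, evaluation, persistence for deployment), here is a minimal scikit-learn sketch; the dataset, estimator, and file name are illustrative assumptions.

```python
# Hedged illustration of the lifecycle steps named above. Data and model
# choices are hypothetical, not taken from the posting.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Preprocessing and the estimator live in one pipeline so the same
# transformations are applied at training and inference time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))

# Persist the fitted pipeline; a backend service (Python or Go) can load and serve it.
joblib.dump(pipeline, "model.joblib")
```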

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a highly skilled and proactive Team Lead – DevOps to join our Infrastructure Management Team. In this role, you will lead initiatives to streamline and automate infrastructure provisioning, CI/CD, observability, and compliance processes using GitLab, containerized environments, and modern DevSecOps tooling. You will work closely with application, data, and ML engineering teams to support MLOps workflows (e.g., model versioning, reproducibility, pipeline orchestration) and implement AIOps practices for intelligent monitoring, anomaly detection, and automated root cause analysis. Your goal will be to deliver secure, scalable, and observable infrastructure across environments. Key Responsibilities: Architect and maintain GitLab CI/CD pipelines to support deployment automation, environment provisioning, and rollback readiness. Implement standardized, reusable CI/CD templates for application, ML, and data services. Collaborate with system engineers to ensure secure, consistent infrastructure-as-code deployments using Terraform, Ansible, and Docker. Integrate security tools such as Vault, Trivy, tfsec, and InSpec into CI/CD pipelines. Govern infrastructure compliance by enforcing policies around secret management, image scanning, and drift detection. Lead internal infrastructure and security audits and maintain compliance records where required. Define and implement observability standards using OpenTelemetry, Grafana, and Graylog. Collaborate with developers to integrate structured logging, tracing, and health checks into services. Enable root cause detection workflows and performance monitoring for infrastructure and deployments. Work closely with application, data, and ML teams to support provisioning, deployment, and infra readiness. Ensure reproducibility and auditability in data/ML pipelines via tools like DVC and MLflow. Participate in release planning, deployment checks, and incident analysis from an infrastructure perspective. Mentor junior DevOps engineers and foster a culture of automation, accountability, and continuous improvement. Lead daily standups, retrospectives, and backlog grooming sessions for infrastructure-related deliverables. Drive internal documentation, runbooks, and reusable DevOps assets. Must Have: Strong experience with GitLab CI/CD, Docker, and SonarQube for pipeline automation and code quality enforcement Proficiency in scripting languages such as Bash, Python, or Shell for automation and orchestration tasks Solid understanding of Linux and Windows systems, including command-line tools, process management, and system troubleshooting Familiarity with SQL for validating database changes, debugging issues, and running schema checks Experience managing Docker-based environments, including container orchestration using Docker Compose, container lifecycle management, and secure image handling Hands-on experience supporting MLOps pipelines, including model versioning, experiment tracking (e.g., DVC, MLflow), orchestration (e.g., Airflow), and reproducible deployments for ML workloads. 
Hands-on knowledge of test frameworks such as PyTest, Robot Framework, REST-assured, and Selenium Experience with infrastructure testing tools like tfsec, InSpec, or custom Terraform test setups Strong exposure to API testing, load/performance testing, and reliability validation Familiarity with AIOps concepts, including structured logging, anomaly detection, and root cause analysis using observability platforms (e.g., OpenTelemetry, Prometheus, Graylog) Exposure to monitoring/logging tools like Grafana, Graylog, OpenTelemetry. Experience managing containerized environments for testing and deployment, aligned with security-first DevOps practices Ability to define CI/CD governance policies, pipeline quality checks, and operational readiness gates Excellent communication skills and proven ability to lead DevOps initiatives and interface with cross-functional stakeholders
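To make the API-testing expectation above concrete, here is a small PyTest-style smoke check of the kind that could gate a deployment; the base URL, endpoints, and response schema are hypothetical placeholders, not a real service contract.

```python
# Sketch of PyTest API smoke checks that could run as a CI/CD readiness gate.
# Requires the target service to be running; all names below are assumptions.
import os

import requests

BASE_URL = os.environ.get("SERVICE_BASE_URL", "http://localhost:8000")


def test_health_endpoint_is_up():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200


def test_prediction_endpoint_contract():
    payload = {"features": [0.1, 0.2, 0.3]}  # illustrative request body
    resp = requests.post(f"{BASE_URL}/predict", json=payload, timeout=5)
    assert resp.status_code == 200
    assert "prediction" in resp.json()  # assumed response schema
```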

Posted 1 week ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site

Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from target architecture (if any). Cloud Integration: Ensuring compatibility with cloud services of AWS and Azure for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhering to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design optimal network plan for given Cloud Infrastructure under the E// network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Creating detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 770160
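For illustration, a minimal drift-detection sketch in the spirit of the monitoring responsibilities above: a two-sample Kolmogorov-Smirnov test compares a live feature sample against the training reference. The distributions, threshold, and retraining trigger are assumptions.

```python
# Illustrative drift check: compare a live feature sample against the training
# reference and flag drift when the KS-test p-value is small. Thresholds and
# the retraining hook are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production feature

statistic, p_value = ks_2samp(reference, live)
DRIFT_ALPHA = 0.01  # assumed significance threshold

if p_value < DRIFT_ALPHA:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}) -> trigger retraining")
else:
    print("No significant drift; keep serving the current model")
```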

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from target architecture (if any). Cloud Integration: Ensuring compatibility with cloud services of AWS and Azure for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhering to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design optimal network plan for given Cloud Infrastructure under the E// network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Creating detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do.
We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 770160
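Since this posting also calls out model optimization via quantization, here is a small, hedged PyTorch sketch of post-training dynamic quantization on a toy model; the architecture and layer choice are illustrative only.

```python
# Sketch of post-training dynamic quantization, one of the optimization levers
# (quantization) mentioned above. The model is a toy stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Quantize the Linear layers' weights to int8; activations stay in float and
# are quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
with torch.no_grad():
    print("fp32 output shape:", model(x).shape)
    print("int8 output shape:", quantized(x).shape)
```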

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Department: Technology / AI Innovation Reports To: AI/ML Lead or Head of Data Science Location: Pune Role Summary: We are looking for an experienced AI/ML & Generative AI Developer to join our growing AI innovation team. You will play a critical role in building advanced machine learning models, Generative AI applications, and LLM-powered solutions. This role demands deep technical expertise, creative problem-solving, and a strong understanding of AI workflows and scalable cloud-based deployments. Key Responsibilities: Design, develop, and deploy AI/ML models and Generative AI applications for diverse enterprise use cases. Implement, fine-tune, and integrate Large Language Models (LLMs) using frameworks like LangChain, LlamaIndex, and RAG pipelines. Build Agentic AI systems with multi-step reasoning and autonomous decision-making capabilities. Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval. Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to operationalize AI solutions. Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices. Leverage Azure and Databricks for training, serving, and monitoring AI models at scale. Required Qualifications & Skills (Mandatory): 4+ years of hands-on experience in AI/ML development, including Generative AI applications. Expertise in RAG, LLMs, and Agentic AI implementations. Strong knowledge of LangChain, LlamaIndex, or similar LLM orchestration frameworks. Proficient in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn. Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures. Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise scenarios. Hands-on experience with Azure cloud services and Databricks. Proven experience designing CI/CD pipelines and working with MLOps tools like MLflow, DVC, or Kubeflow. Soft Skills: Strong problem-solving and critical thinking ability. Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders. Strong collaboration and teamwork in agile, cross-functional environments. Growth mindset with curiosity to explore and learn emerging technologies. Preferred Qualifications: Familiarity with vector databases: FAISS, Pinecone, Weaviate. Experience with AutoGPT, CrewAI, or similar agent frameworks. Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools. Understanding of AI security, responsible AI, and model governance. Key Relationships: Internal: Data Scientists, Data Engineers, DevOps Engineers, Product Managers, Solution Architects. External: AI/ML platform vendors, cloud service providers (Microsoft Azure), third-party data providers. How to Apply: If you meet the above qualifications and are excited about this opportunity, please send your updated resume to nivetha.s@eminds.ai Best regards, Nivetha S Senior Talent Engineer nivetha.s@eminds.ai
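To ground the RAG terminology used above, here is a framework-free sketch of the retrieval step: embed documents, embed the query, and select the top-k matches by cosine similarity. The embed function and documents are stand-ins; a real pipeline would call an embedding model and an LLM (for example via LangChain or LlamaIndex).

```python
# Minimal, framework-free sketch of the retrieval step in a RAG pipeline.
# `embed` and the generation step are hypothetical placeholders.
import numpy as np


def embed(text: str) -> np.ndarray:
    # Stand-in embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)


documents = [
    "Invoice processing policy for enterprise customers.",
    "Steps to reset a corporate VPN password.",
    "Quarterly revenue summary for the analytics product.",
]
doc_vectors = np.stack([embed(d) for d in documents])


def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q               # cosine similarity (unit-norm vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


context = retrieve("How do I reset my VPN password?")
prompt = "Answer using only this context:\n" + "\n".join(context)
# generate(prompt)  # placeholder for the LLM call in the real pipeline
print(prompt)
```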

Posted 1 week ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Role Title: AI Platform Engineer Location: Bangalore (In Person in office when required) Part of the GenAI COE Team Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving And GPU Architecture Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM. Model Fine-Tuning And Optimization Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models And Use Cases Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps And LLMOps Proficiency Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Data Warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, JavaScript Email: diksha.singh@aptita.com
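As an illustration of the A/B testing and inference-API responsibilities listed above, here is a hedged FastAPI sketch that routes a configurable share of traffic to a candidate model; the models, split ratio, and request schema are hypothetical placeholders, not the platform's actual design.

```python
# Hedged sketch of A/B routing between two model versions behind one inference
# endpoint. The models are trivial stand-ins; in practice they would be loaded
# from a registry (e.g., MLflow) and the split driven by configuration.
# Run with: uvicorn <module>:app  (module name is a placeholder)
import random

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
TRAFFIC_TO_B = 0.10  # assumed 10% canary split


class PredictRequest(BaseModel):
    features: list[float]


def model_a(features: list[float]) -> float:  # placeholder "current" model
    return sum(features)


def model_b(features: list[float]) -> float:  # placeholder "candidate" model
    return sum(features) * 1.01


@app.post("/predict")
def predict(req: PredictRequest):
    # Pick a variant per request; logging variant + outcome downstream is what
    # makes the A/B comparison possible.
    variant = "b" if random.random() < TRAFFIC_TO_B else "a"
    model = model_b if variant == "b" else model_a
    return {"variant": variant, "prediction": model(req.features)}
```

At scale the routing decision would usually be sticky per user or session rather than per request, but the shape of the endpoint stays the same.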

Posted 1 week ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal

On-site

Kolkata, West Bengal, India +1 more Job ID 770160 Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from target architecture (if any). Cloud Integration: Ensuring compatibility with cloud services of AWS and Azure for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhering to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design optimal network plan for given Cloud Infrastructure under the E// network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Creating detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Egypt, Chennai, Bengaluru

Hybrid

We're Hiring: MLOps Engineer | Cairo, Egypt | Immediate Joiners Only Share CVs to vijay.s@xebia.com Location: Cairo, Egypt Experience: 6-8 Years Mode: Onsite Joining: Immediate or Max 2 Weeks Notice Relocation: Open to relocating to Egypt ASAP Job Summary: Xebia is seeking a seasoned MLOps Engineer to scale and operationalize ML solutions for our strategic client in Cairo. This is an onsite role, perfect for professionals who are ready to deploy cutting-edge ML pipelines in real-world enterprise environments. Key Responsibilities: Design & manage end-to-end scalable, reliable ML pipelines Build CI/CD pipelines with Azure DevOps Deploy and track ML models using MLflow Work on large-scale data with Cloudera/Hadoop (Hive, Spark, HDFS) Support Knowledge Graphs, metadata enrichment, model lineage Collaborate with DS & engineering teams to ensure governance and auditability Implement model performance monitoring, drift detection, and data quality checks Support DevOps automation aligned with enterprise-grade compliance standards Required Skills: 6-8 years in MLOps / Machine Learning Engineering Hands-on with MLflow, Azure DevOps, Python Deep experience with Cloudera, Hadoop, Spark, Hive Exposure to Knowledge Graphs, containerization (Docker/Kubernetes) Familiar with TensorFlow, scikit-learn, or PyTorch Understanding of data security, access controls, audit logging Preferred: Azure Certifications (e.g., Azure Data Engineer / AI Engineer Associate) Experience with Apache NiFi, Airflow, or similar tools Background in regulated sectors like BFSI, Healthcare, or Pharma Soft Skills: Strong problem-solving & analytical thinking Clear communication & stakeholder engagement Passion for automation & continuous improvement Additional Information: Only apply if: You can join within 2 weeks or are an immediate joiner You're open to relocating to Cairo, Egypt ASAP You hold a valid passport Visa-on-arrival/B1/Schengen holders from MEA region preferred To Apply: Send your updated CV to vijay.s@xebia.com along with: Full Name Total Experience Current CTC Expected CTC Current Location Preferred Xebia Location (Cairo) Notice Period / Last Working Day (if serving) Primary Skills LinkedIn Profile Valid Passport No Be part of a global transformation journey: make AI work at scale! #MLOps #Hiring #AzureDevOps #MLflow #CairoJobs #ImmediateJoiners #DataEngineering #Cloudera #Hadoop #XebiaCareers
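For a concrete sense of the data quality checks mentioned above, here is a small pandas sketch that validates null rates and value ranges before a batch is used; column names and thresholds are assumptions.

```python
# Illustrative data-quality gate of the kind a pipeline might run before
# training or scoring: null-rate and range checks on a batch of records.
import pandas as pd

batch = pd.DataFrame({
    "age": [34, 51, None, 29],
    "balance": [1200.0, -50.0, 300.5, 89000.0],
})

MAX_NULL_RATE = 0.05                              # assumed tolerance
RANGES = {"age": (0, 120), "balance": (0, 1_000_000)}  # assumed valid ranges

failures = []
for col, (lo, hi) in RANGES.items():
    null_rate = batch[col].isna().mean()
    if null_rate > MAX_NULL_RATE:
        failures.append(f"{col}: null rate {null_rate:.2%} exceeds {MAX_NULL_RATE:.0%}")
    out_of_range = (~batch[col].dropna().between(lo, hi)).sum()
    if out_of_range:
        failures.append(f"{col}: {out_of_range} value(s) outside [{lo}, {hi}]")

if failures:
    raise ValueError("Data quality check failed:\n" + "\n".join(failures))
print("Batch passed data quality checks")
```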

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies