
1795 Mlflow Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply for each role directly on the original job portal.

2.0 years

0 - 0 Lacs

Vaishali Nagar, Jaipur, Rajasthan

On-site

Job Title: AI Developer
Company: Eoxys IT Solution
Location: Jaipur, Rajasthan
Experience: 1–2 Years
Employment Type: Full-Time
Education Qualification: BCA / MCA / B.Tech in Computer Science, IT, or related field

Key Skills Required:
- Strong programming skills in Python
- Hands-on experience with TensorFlow, PyTorch, Keras
- Experience building and deploying end-to-end ML pipelines
- Solid understanding of model evaluation, cross-validation, and hyperparameter tuning
- Familiarity with cloud platforms such as AWS, Azure, or GCP for AI/ML workloads
- Knowledge of MLOps tools like MLflow, DVC, or Apache Airflow (see the sketch after this listing)
- Exposure to domains like Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning

Roles & Responsibilities:
- Develop, train, and deploy machine learning models for real-world applications
- Implement scalable ML solutions using cloud platforms
- Collaborate with cross-functional teams to integrate AI capabilities into products
- Monitor model performance and conduct regular improvements
- Maintain version control and reproducibility using MLOps practices

Additional Requirements:
- Strong analytical and problem-solving skills
- Passion for learning and implementing cutting-edge AI/ML technologies
- Good communication and teamwork skills

Salary: Based on experience and skillset
Apply Now to be a part of our innovative AI journey!
Job Type: Full-time
Pay: ₹15,000.00 - ₹40,000.00 per month
Work Location: In person
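For candidates new to the MLOps tooling named in this listing, here is a minimal sketch of experiment tracking with MLflow around a scikit-learn cross-validated hyperparameter search; the experiment name, parameter grid, and synthetic dataset are illustrative placeholders, not details from the posting:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Hyperparameter tuning via 5-fold cross-validation, tracked as one MLflow run
param_grid = {"n_estimators": [100, 200], "max_depth": [5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)

mlflow.set_experiment("ai-developer-demo")  # illustrative experiment name
with mlflow.start_run():
    search.fit(X, y)
    mlflow.log_params(search.best_params_)                  # best hyperparameters
    mlflow.log_metric("cv_accuracy", search.best_score_)    # mean CV score
    mlflow.sklearn.log_model(search.best_estimator_, "model")  # versioned, reproducible artifact
```

Logging parameters, metrics, and the fitted model in one run is what makes experiments reproducible and comparable later in the MLflow UI.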

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

About the job

What makes Techjays an inspiring place to work: At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI.

We are looking for a detail-oriented and curious AI QA Engineer to join our growing QA team. You will play a critical role in ensuring the quality, safety, and reliability of our AI-powered products and features. If you're passionate about AI, testing complex systems, and driving high standards of quality, this role is for you!

Primary Skills: QA Automation, Python, API Testing, AI/ML Testing, Prompt Evaluation, Adversarial Testing, Risk-Based Testing, LLM-as-a-Judge, Model Metrics Validation, Test Strategy.
Secondary Skills: CI/CD Integration, Git, Cloud Platforms (AWS/GCP/Azure ML), MLflow, Postman, Testim, Applitools, Collaboration Tools (Jira, Confluence), Synthetic Data Generation, AI Ethics & Bias Awareness.
Experience: 3 - 8 Years
Work Location: Coimbatore / Remote

Must-Have Skills:
- Foundational QA Skills: strong knowledge of test design, defect management, and the QA lifecycle; experience with risk-based testing and QA strategy.
- AI/ML Knowledge: basic understanding of machine learning workflows and training/inference cycles; awareness of AI quality challenges such as bias, fairness, and transparency; familiarity with AI evaluation metrics (accuracy, precision, recall, F1-score); hands-on experience with prompt testing, synthetic data generation, and non-deterministic behavior validation.
- Technical Capabilities: Python programming for test automation and data validation; hands-on experience with API testing tools (Postman, Swagger, REST clients); knowledge of test automation tools (e.g., PyTest, Playwright, Selenium); familiarity with Git and version control best practices; understanding of CI/CD pipelines and integration testing.
- Tooling (Preferred): tools like Diffblue, Testim, Applitools, Kolena, Galileo, MLflow, Weights & Biases; basic understanding of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI).
- Soft Skills: excellent analytical thinking and attention to detail; strong collaboration and communication skills for working across cross-functional teams; a proactive, pull-mode work ethic (a self-starter who takes ownership); passion for learning new technologies and contributing to AI quality practices.

Roles & Responsibilities:
- Design, write, and execute test plans and test cases for AI/ML-based applications.
- Collaborate with data scientists, ML engineers, and developers to understand model behavior and expected outcomes.
- Perform functional, regression, and exploratory testing on AI components and APIs.
- Validate model outputs for accuracy, fairness, bias, and explainability.
- Implement and run adversarial testing, edge cases, and out-of-distribution data scenarios.
- Conduct prompt testing and evaluation for LLM (Large Language Model)-based applications.
- Use LLM-as-a-Judge and AI tools to automate evaluation of AI responses where possible (see the sketch after this posting).
- Validate data pipelines, datasets, and ETL workflows.
- Track model performance metrics such as precision, recall, and F1-score, and flag potential degradation.
- Document defects and inconsistencies, and raise risks proactively with the team.

What we offer:
- Best-in-class packages.
- Paid holidays and flexible paid time away.
- Casual dress code and flexible working environment.
- Medical insurance covering self and family up to 4 lakhs per person.
- Work in an engaging, fast-paced environment with ample opportunities for professional development.
- A diverse and multicultural work environment.
- Be part of an innovation-driven culture that provides the support and resources needed to succeed.
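To make the prompt-testing and LLM-as-a-Judge duties above concrete, here is a hedged sketch of an automated prompt-evaluation test with pytest; `call_model` and `judge_response` are stand-ins (not real project functions) for whatever model client and judge prompt a team actually uses, and the keyword checks and threshold are illustrative:

```python
import pytest

def call_model(prompt: str) -> str:
    """Stand-in for the system under test; replace with a real LLM API call."""
    canned = {
        "What currency does Japan use?": "Japan uses the yen.",
        "Name the largest planet in the solar system.": "The largest planet is Jupiter.",
    }
    return canned.get(prompt, "I am not sure.")

def judge_response(question: str, answer: str) -> float:
    """Stand-in LLM-as-a-Judge: in practice a second model scores the answer for
    relevance and factuality; a trivial heuristic keeps this sketch runnable."""
    return 1.0 if answer and answer != "I am not sure." else 0.0

# Each case pairs a prompt with a keyword the answer must contain.
CASES = [
    ("What currency does Japan use?", "yen"),
    ("Name the largest planet in the solar system.", "jupiter"),
]

@pytest.mark.parametrize("question,expected_keyword", CASES)
def test_prompt_quality(question, expected_keyword):
    answer = call_model(question)
    # Deterministic guardrail: the expected keyword must appear despite non-deterministic output.
    assert expected_keyword in answer.lower()
    # Judge guardrail: the scored quality must clear an illustrative threshold.
    assert judge_response(question, answer) >= 0.7
```

Combining a cheap deterministic check with a judged score is one common way to keep LLM regression suites both fast and meaningful.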

Posted 3 days ago

Apply

4.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Biz2X
Biz2X is the leading digital lending platform, enabling financial providers to power growth with a modern omni-channel experience, best-in-class risk management tools, and a comprehensive yet flexible servicing engine. The company partners with financial institutions to support their digital transformation efforts with Biz2X’s digital lending platform. Biz2X solutions not only reduce operational expenses but also accelerate lending growth by significantly improving client experience, reducing total turnaround time, and equipping relationship managers with powerful monitoring insights and alerts. Read our latest press release: Press Release - Biz2X.

Job Overview: We are seeking a Senior Engineer – AI/ML to drive the development and deployment of sophisticated AI solutions in our fintech products. You will oversee MLOps pipelines and manage large language models (LLMs) to enhance our financial technology services.

Key Responsibilities:
- AI/ML Development: Design and implement advanced ML models for applications including fraud detection, credit scoring, and algorithmic trading.
- MLOps: Develop and manage MLOps pipelines using tools such as MLflow, Kubeflow, and Airflow for CI/CD, model monitoring, and automation.
- LLMOps: Optimize and operationalize LLMs (e.g., GPT-4, BERT) for fintech applications like automated customer support and sentiment analysis (see the sketch after this posting).
- Collaboration: Work with product managers, data engineers, and business analysts to align technical solutions with business objectives.

Qualifications:
- Experience: 4-6 years in AI, ML, MLOps, and LLMOps with a focus on fintech.
- Technical Skills: Expertise in TensorFlow, PyTorch, scikit-learn, and MLOps tools (MLflow, Kubeflow). Proficiency in large language models (LLMs) and cloud platforms (AWS, GCP, Azure). Strong programming skills in Python, Java, or Scala. Experience in building RAG pipelines, NLP, OCR, and PySpark.
- Good to have: Production GenAI experience in the fintech/lending domain.
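As a small illustration of the sentiment-analysis use case mentioned above, here is a hedged sketch using the Hugging Face Transformers pipeline API; the example texts are invented, and the default model the pipeline downloads is just a convenient starting point rather than anything recommended by the posting:

```python
from transformers import pipeline

# The default sentiment model downloads on first use; a domain-tuned
# checkpoint would be swapped in for production fintech workloads.
classifier = pipeline("sentiment-analysis")

support_messages = [
    "The loan approval took two days longer than promised.",
    "Great experience, the relationship manager resolved my query quickly.",
]

for message, result in zip(support_messages, classifier(support_messages)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {message}")
```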

Posted 4 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Overview
We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization, and is ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration.

Responsibilities
- Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines.
- LLM Expertise: Build and fine-tune GPT, Claude, Gemini, LLaMA 2/3, and Mistral; deep knowledge of RLHF, transformer internals, and multi-modal integration.
- RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector); see the sketch after this posting.
- Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, and AWS SageMaker/Bedrock integration.
- Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards.
- LLMOps/MLOps: Deploy CI/CD pipelines, automate tuning/rollout, and handle drift, rollback, and incidents with KPI dashboards.
- Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling.
- Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, and LGPL patterns.
- Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, and HIL to track hallucination, toxicity, latency, and sustainability.

Skills Required
- 4–5 years in AI/ML (2+ in GenAI)
- Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC
- Proficiency with MLflow, Kubeflow, Airflow, Prompt flow
- Experience with LLMs, vector DBs, prompt engineering, MLOps
- Solid foundation in applied mathematics & statistics

Nice to Have
- Open-source contributions, AI publications
- Hands-on with cloud-native GenAI deployment
- Deep interest in ethical AI and AI safety

2 Days WFO Mandatory

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
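For readers unfamiliar with the RAG pipelines this role centres on, here is a minimal retrieval sketch with sentence-transformers and FAISS; the documents, model name, and prompt template are placeholders chosen for illustration, not details from the posting:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for an enterprise document store
documents = [
    "Invoices over 50,000 INR require a second approver.",
    "Password resets are handled through the IT self-service portal.",
    "Travel reimbursements must be filed within 30 days of the trip.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedder
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

# Inner-product index over normalized vectors = cosine-similarity search
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

question = "How long do I have to claim travel expenses?"
query_vector = encoder.encode([question], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vector, dtype="float32"), 2)

# Retrieved chunks become the grounding context for an LLM prompt
context = "\n".join(documents[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

A production pipeline would add chunking, a managed vector database, and hybrid retrieval, but the embed-index-retrieve-prompt loop stays the same.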

Posted 4 days ago

Apply

10.0 years

15 - 20 Lacs

Jaipur, Rajasthan, India

On-site

We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale.

🎯 Key Responsibilities

🧩 Product Management
- Define product vision, roadmap, and AI use cases aligned with business goals.
- Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features.
- Translate ambiguous problem statements into clear, prioritized product requirements.

⚙️ AI/ML Engineering & Model Development
- Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.).
- Build pipelines for data preprocessing, model training, evaluation, and deployment.
- Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc.
- Contribute to R&D for cutting-edge models in GenAI (text, vision, code, multimodal).

🤖 AI Agents & LLM Tooling
- Design and implement autonomous or semi-autonomous AI agents using tools like AutoGen, LangGraph, CrewAI, etc.
- Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG).
- Continuously monitor, test, and improve LLM behavior, safety, and output quality.

📊 Data Science & Analytics
- Explore and analyze large datasets to generate insights and inform model development.
- Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis.
- Work with structured, unstructured, and multimodal data (text, audio, image, etc.).

🧰 Preferred Tech Stack / Tools
- Languages: Python, SQL, optionally Rust or TypeScript
- Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI
- Platforms: AWS, Azure, GCP, Vertex AI, SageMaker
- MLOps: MLflow, Weights & Biases, DVC, Kubeflow
- Data: Pandas, NumPy, Spark, Airflow, Databricks
- Vector DBs: Pinecone, Weaviate, FAISS
- Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral
- Tools: Git, Docker, Kubernetes, REST, GraphQL

🧑‍💼 Qualifications
- Bachelor’s, Master’s, or PhD in Computer Science, Data Science, Machine Learning, or a related field.
- 10+ years of experience in core ML, AI, or Data Science roles.
- Proven experience building and shipping AI/ML products.
- Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation.
- Strong product thinking and ability to work closely with both technical and non-technical stakeholders.
- Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision.

🌟 Bonus Skills
- Experience with autonomous agents and multi-agent orchestration.
- Open-source contributions to ML/AI projects.
- Prior startup or high-growth tech company experience.
- Knowledge of reinforcement learning, diffusion models, or multimodal AI.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior MLOps Engineer-2 Horizontal Data Science Enablement Team within SSO Data Science is looking for a Senior MLOps Engineer who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD or automation pipelines, and lead best practices. All About You Assist in the administration, configuration, and maintenance of Databricks clusters and workspaces. Monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health. Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards. Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency. Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders. Assist in development of MLOps solutions, namely within the scope of, but not limited to: Model monitoring Feature catalog/store Model lineage maintenance CI/CD pipelines to gatekeep model lifecycle from development to production Maintain services once they are live by measuring and monitoring availability, latency and overall system health. What Experience You Need Master’s degree in computer science, software engineering, or a similar field. Strong experience with Databricks and its management of roles and resources Experience with MLOps solutions like MLFlow Experience with performing data analysis, data observability, data ingestion and data integration. 2+ DevOps, SRE, or general systems engineering experience. 2+ years of hands-on experience in industry standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory, and Chef. Strong coding ability in Python or other languages like Java, and C++, plus a solid grasp of SQL fundamentals Systematic problem-solving approach, coupled with strong communication skills What could set you apart SQL tuning experience. Strong automation experience Ability to operate in a 24x7 environment encompassing global time zones Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks comes with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary MLOps Engineering manager Horizontal Data Science Enablement Team within SSO Data Science is looking for a MLOps Engineering Manager who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD or automation pipelines, and lead best practices. All About You Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces. Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health. Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards. Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency. Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders. Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs. Maintain comprehensive documentation of Databricks configurations, processes, and best practices and lead participation in security and architecture reviews of the infrastructure Bring MLOps expertise to the table, namely within the scope of, but not limited to: Model monitoring Feature catalog/store Model lineage maintenance CI/CD pipelines to gatekeep model lifecycle from development to production Own and maintain MLOps solutions either by leveraging open-sourced solutions or with a 3rd party vendor Build LLMOps pipelines using open-source solutions. Recommend alternatives and onboard products to the solution Maintain services once they are live by measuring and monitoring availability, latency and overall system health. What Experience You Need Master’s degree in computer science, software engineering, or a similar field. Strong experience with Databricks and its management of roles and resources Experience in cloud technologies and operations Experience supporting API’s and Cloud technologies Experience with MLOps solutions like MLFlow Experience with performing data analysis, data observability, data ingestion and data integration. 5+ DevOps, SRE, or general systems engineering experience. 2+ years of hands-on experience in industry standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory, and Chef. Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, PII handling) Strong coding ability in Python or other languages like Java, and C++, plus a solid grasp of SQL fundamentals Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive. What could set you apart SQL tuning experience. Strong automation experience Strong Data Observability experience. Operations experience in supporting highly scalable systems. 
Ability to operate in a 24x7 environment encompassing global time zones Self-Motivating and creatively solves software problems and effectively keep the lights on for modeling systems. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks comes with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Systems Engineer specializing in Data DevOps/MLOps, you will play a crucial role in our team by leveraging your expertise in data engineering, automation for data pipelines, and operationalizing machine learning models. This position requires a collaborative professional who can design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment. You will be responsible for building and maintaining infrastructure for data processing and model training using cloud-native tools and services. Your role will involve automating processes for data validation, transformation, and workflow orchestration, ensuring seamless integration of ML models into production. You will work closely with data scientists, software engineers, and product teams to optimize performance and reliability of model serving and monitoring solutions. Managing data versioning, lineage tracking, and reproducibility for ML experiments will be part of your responsibilities. You will also identify opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience. Implementing security measures to safeguard data integrity and ensure regulatory compliance will be crucial, along with diagnosing and resolving issues throughout the data and ML pipeline lifecycle. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, along with 4+ years of experience in Data DevOps, MLOps, or similar roles. Proficiency in cloud platforms like Azure, AWS, or GCP is required, as well as competency in using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Expertise in containerization and orchestration technologies like Docker and Kubernetes is essential, along with a background in data processing frameworks such as Apache Spark or Databricks. Skills in Python programming, including proficiency in data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch, are necessary. Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions, as well as understanding version control tools like Git and MLOps platforms such as MLflow or Kubeflow, will be valuable. Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana), strong problem-solving skills, and the ability to contribute independently and within a team are also required. Excellent communication skills and attention to documentation are essential for success in this role. Nice-to-have qualifications include knowledge of DataOps practices and tools like Airflow or dbt, an understanding of data governance concepts and platforms like Collibra, and a background in Big Data technologies like Hadoop or Hive. Qualifications in cloud platforms or data engineering would be an added advantage.,
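Since the role described above centres on automating data validation and workflow orchestration, a minimal Airflow DAG wiring a validation step ahead of a transformation step may help illustrate the pattern; the DAG id, schedule, and task bodies are invented placeholders rather than anything from the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_data():
    # Placeholder: check row counts, nulls, schema drift, etc.
    print("validating incoming batch")

def transform_data():
    # Placeholder: clean and reshape the validated batch
    print("transforming batch")

with DAG(
    dag_id="example_data_pipeline",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    validate = PythonOperator(task_id="validate", python_callable=validate_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_data)

    validate >> transform  # run transformation only after validation succeeds
```

Expressing the dependency explicitly (`validate >> transform`) is what lets the orchestrator retry, backfill, and alert on each stage independently.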

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 individuals across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business. Responsibilities: - Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making. - Design, develop, and implement Generative AI solutions leveraging various platforms including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services. - Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources. - Build and optimize data pipelines to efficiently process and analyze large-scale datasets. - Implement Agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions. - Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases. - Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary. - Stay updated with the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones. - Mentor and guide junior team members to aid in their professional growth and skill development. - Ensure model explainability, fairness, and compliance with responsible AI principles. - Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations. Minimum Qualifications / Skills: - Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field. - Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models. - Proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow. - Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Data Bricks, RAG Pipeline). - Expertise in LLMs, transformer architectures, and embeddings. - Experience in building and optimizing end-to-end data pipelines. - Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques. - Knowledge of Reinforcement Learning (RLHF), fine-tuning LLMs, and prompt engineering. - Strong analytical skills with the ability to translate business requirements into AI/ML solutions. - Excellent problem-solving, critical thinking, and communication skills. 
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous. Preferred Qualifications / Skills: - Experience with multi-modal AI models and computer vision applications. - Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs. - Certifications in Microsoft Azure AI, Data Science, or ML Engineering. Job Title: Principal Consultant Location: India-Noida Schedule: Full-time Education Level: Bachelor's / Graduation / Equivalent Job Posting: Apr 11, 2025, 9:36:00 AM Unposting Date: May 11, 2025, 1:29:00 PM Master Skills List: Digital Job Category: Full Time,

Posted 4 days ago

Apply

0.0 - 3.0 years

12 - 24 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you’ll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.

Key Responsibilities:
- Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction.
- Design and build Knowledge Graphs by structuring information from unstructured or semi-structured text (see the sketch after this posting).
- Apply prompt engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios.
- Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks.
- Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from the graph database.
- Collaborate with teams to translate business problems into structured use cases for model development.
- Understand custom ontologies and entity schemas for the corresponding domain.
- Work with graph databases like Neo4j or similar, and query using Cypher or SPARQL.
- Evaluate and track performance using both standard metrics and graph-based KPIs.

Required Skills & Qualifications:
- Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and OpenAI APIs.
- Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing.
- Practical experience in working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning.
- Hands-on experience with graph database design and knowledge graph generation.
- Proficient in Python API development (Flask/FastAPI) for serving models and utilities.
- Strong background in data preprocessing, text normalization, and annotation frameworks.
- Understanding of LLM orchestration with tools like LangChain or workflow automation.
- Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).

Nice to Have:
- Experience using LLMs for information extraction, summarization, or question answering over knowledge bases.
- Exposure to graph embeddings, GNNs, or semantic web technologies (RDF, OWL).
- Experience with cloud-based model deployment (AWS/GCP/Azure).
- Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,400,000.00 per year
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Natural Language Processing (NLP): 3 years (Preferred)
Language: English & Tamil (Preferred)
Location: Chennai, Tamil Nadu (Preferred)
Work Location: In person
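To ground the NER-to-Knowledge-Graph workflow described above, here is a minimal sketch that extracts entities with spaCy and upserts them into Neo4j with Cypher MERGE statements; the connection URI, credentials, and sample sentence are placeholders, and a production pipeline would also extract relationships and map entities onto a domain ontology:

```python
import spacy
from neo4j import GraphDatabase

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
text = "Acme Corp acquired Beta Labs in Chennai in 2021."
doc = nlp(text)

# Placeholder connection details for a local Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    for ent in doc.ents:
        # MERGE is idempotent: re-running the pipeline will not duplicate nodes
        session.run(
            "MERGE (e:Entity {name: $name}) SET e.label = $label",
            name=ent.text,
            label=ent.label_,
        )
driver.close()
```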

Posted 4 days ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI DEVELOPER / LLM SPECIALIST (Full-time)

Company Description
RMgX is a Gurgaon-based digital product innovation & consulting firm. Here at RMgX, we design and build elegant, data-driven digital solutions for complex business problems. At the core of the solutions crafted by us is a very strong user experience practice to deeply understand the goals and emotions of business and end-users. RMgX is driven by a passion for quality, and we strongly believe in our people and their capabilities.

Duties And Responsibilities
- Design, develop, and deploy AI solutions using Large Language Models (LLMs) such as GPT, LLaMA, Claude, or Mistral.
- Fine-tune and customize pre-trained LLMs for business-specific use cases.
- Build and maintain NLP pipelines for classification, summarization, semantic search, etc.
- Build and maintain vector database pipelines using Milvus, Pinecone, etc.
- Collaborate with cross-functional teams to integrate LLM-based features into applications.
- Analyze and improve model performance using appropriate metrics.
- Stay up-to-date with AI/ML research and integrate new techniques as appropriate.

Work Experience
12 years of experience in AI/ML development with a specific focus on NLP and LLM-based applications.

Skills, Abilities & Knowledge
- Strong hands-on experience in Python and AI/ML libraries (Hugging Face Transformers, LangChain, PyTorch, TensorFlow, etc.).
- Proficiency in working with closed-source models via APIs (e.g., OpenAI, Gemini).
- Understanding of prompt engineering, embeddings, and vector databases like FAISS, Milvus, or Pinecone.
- Experience in deploying models using REST APIs, Docker, and cloud platforms (AWS/GCP/Azure).
- Familiarity with MLOps and version control tools (Git, MLflow, etc.).
- Knowledge of LLMOps platforms such as LangSmith and Weights & Biases is a plus.
- Strong problem-solving skills, a keen eye for detail, and ability to work in an agile setup.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.

Additional Information: Perks And Benefits
- Flexible working hours.
- Saturdays and Sundays are fixed off.
- Health Insurance and Personal Accident Insurance.
- BYOD (Bring Your Own Device) Benefit.
- Laptop Buyback Scheme.

(ref:hirist.tech)

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

punjab

On-site

As a Senior AI/ML Engineer/Lead at our company located in Mohali, you will play a crucial role in leading the design, architecture, and delivery of complex AI/ML solutions. With over 8 years of experience, you will be responsible for developing and implementing machine learning models across various domains like NLP, computer vision, recommendation systems, classification, and regression. Your expertise will be utilized to integrate Large Language Models (LLMs) such as GPT, BERT, LLaMA, and other state-of-the-art transformer architectures into our enterprise-grade applications to drive innovation. Your key responsibilities will include selecting and implementing ML frameworks, tools, and cloud technologies that align with our business goals. You will lead AI/ML experimentation, PoCs, benchmarking, and model optimization initiatives while collaborating with data engineering, software development, and product teams to integrate ML capabilities seamlessly into production systems. Additionally, you will establish and enforce robust MLOps pipelines covering CI/CD for ML, model versioning, reproducibility, and monitoring to ensure reliability at scale. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field. Your hands-on experience in the AI/ML domain, along with a proven track record of delivering production-grade ML systems, will be highly valued. Proficiency in machine learning algorithms, deep learning architectures, and advanced neural network design is essential. You should also possess expertise in LLMs, transformer-based models, prompt engineering, and embeddings, with strong programming skills in Python and familiarity with cloud platforms for scalable ML workloads. Having experience with MLOps tools and practices, such as MLflow, Kubeflow, SageMaker, or equivalent, will be beneficial. Furthermore, exposure to vector databases, real-time AI systems, edge AI, streaming data environments, and active contributions to open-source projects, research publications, or thought leadership in AI/ML will be advantageous. A certification in AI/ML would be a significant plus. If you are a self-driven individual with excellent leadership, analytical thinking, and communication skills, and if you are passionate about staying updated on AI advancements and championing their adoption in business use, we encourage you to apply for this exciting opportunity.,

Posted 4 days ago

Apply

2.0 - 13.0 years

0 Lacs

kolkata, west bengal

On-site

You are a highly motivated and technically strong Data Scientist / MLOps Engineer with 13 years of experience, looking to join our growing AI & ML team in Kolkata. Your role will involve designing, developing, and deploying scalable machine learning solutions with a focus on operational excellence, data engineering, and GenAI integration. Your key responsibilities will include building and maintaining scalable machine learning pipelines using Python, deploying and monitoring models using MLFlow and MLOps stacks, designing and implementing data workflows with PySpark, and leveraging standard data science libraries for model development. You will also work with GenAI technologies such as Azure OpenAI and collaborate with cross-functional teams to meet business objectives. To excel in this role, you must have expertise in Python for data science and backend development, solid experience with PostgreSQL and MSSQL databases, hands-on experience with data science packages like Scikit-Learn, Pandas, Numpy, and Matplotlib, as well as experience with Databricks, MLFlow, and Azure. A strong understanding of MLOps frameworks and deployment automation is essential, along with prior exposure to FastAPI and GenAI tools like Langchain or Azure OpenAI. Preferred qualifications include experience in the Finance, Legal, or Regulatory domain, working knowledge of clustering algorithms and forecasting techniques, and previous experience in developing reusable AI frameworks or productized ML solutions. You should hold a B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field. By joining us, you will work on cutting-edge ML and GenAI projects, be part of a collaborative and forward-thinking team, and have opportunities for rapid growth and technical leadership. This is a full-time position based in Kolkata, with benefits including leave encashment, paid sick time, paid time off, Provident Fund, and work from home options. If you meet the required qualifications and are excited about the prospect of working in a dynamic AI & ML environment, we encourage you to apply before the application deadline of 02/08/2025. The expected start date for this position is 04/08/2025.,

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

You will be joining a technology-driven publishing and legal-tech organization that is currently expanding its capabilities through advanced AI initiatives. With a strong internal team proficient in .NET, MERN, MEAN stacks, and recent chatbot development using OpenAI and LLaMA, we are seeking an experienced AI Solution Architect to lead the design, development, and deployment of transformative AI projects. As an AI Solution Architect, you will play a critical role in designing and delivering enterprise-grade AI solutions. Your responsibilities will include architecting, planning, and implementing AI solutions using Vector Databases, RAG Pipelines, and LLM techniques. You will lead AI solution design discussions, collaborate with engineering teams to integrate AI modules, evaluate AI frameworks, and mentor junior AI engineers and developers. To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, or related fields, along with at least 5 years of experience in software architecture with a focus on AI/ML system design and project delivery. Hands-on expertise in Vector Search Engines, LLM Tuning & Deployment, RAG systems, and RLHF techniques is essential. You should also have experience working with multi-tech teams, strong command over architecture principles, and excellent team leadership skills. Experience with tools like MLflow, Weights & Biases, Docker, Kubernetes, and Azure/AWS AI services would be a plus. Exposure to data pipelines, MLOps, and AI governance, as well as familiarity with enterprise software lifecycle and DevSecOps practices, are also desirable. If you are passionate about AI technology, have a proven track record of delivering AI projects, and enjoy leading and mentoring technical teams, we encourage you to apply for this exciting opportunity to drive AI adoption across products and internal processes.,

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

Working at Freudenberg: "We will wow your world!" This is our promise. As a global technology group, we not only make the world cleaner, healthier, and more comfortable but also offer our 52,000 employees a networked and diverse environment where everyone can thrive individually. Be surprised and experience your own wow moments.

Klüber Lubrication, a company of the Freudenberg Group, is the global leader in specialty lubrication with manufacturing operations in North and South America, Europe, and Asia, subsidiaries in more than 30 different countries, and distribution partners in all regions of the world, supported by our HQ in Germany. We are passionate about innovative tribological solutions that help our customers to be successful. We supply products and services, many of them customized, in almost all industries from automotive to the wind energy markets.

Some of your Benefits
- Diversity & Inclusion: We focus on providing an inclusive environment and recognize our diversity contributes to our success.
- Health Insurance: Rely on comprehensive services whenever you need it.
- Personal Development: We offer a variety of trainings to ensure you can develop in your career.
- Safe Environment: We strive to ensure safety remains a top priority and provide a stable environment for our employees.
- Sustainability & Social Commitment: We support social and sustainable projects and encourage employee involvement.

Bangalore, On-Site | Klüber Lubrication India Pvt. Ltd.

You support our team as Statistical Analytics Engineer (F/M/D)

Responsibilities
- Performs projects in statistics, data science, and artificial intelligence in the technical field of Klüber Lubrication (KL) under the guidance of experts, and evaluates and comments on results.
- Implements data processes and data products into operations and works on their operation and optimization.
- Coordinates regularly with experts in statistics, data science, and artificial intelligence at KL.
- Maintains a balance between effort (time, cost) and expected benefit of incoming requests.
- Contributes ideas for KL-relevant developments and methods, especially in the field of data analysis and artificial intelligence, and implements them as needed.
- Supports the improvement and automation of processes/workflows in own and adjacent work areas through statistical methods and evaluations.
- Works on cross-functional project teams and projects with external partners.
- Keeps own expertise up to date (self-study, external training, meetings, specialist groups).
- Supports technical colleagues in the introduction, use, and quality assurance of data analysis tools.
- Presents contributions to statistics and data science in the context of Klüber internal and customer training.

Qualifications
- Engineering Graduate or Master's in Chemistry, Physics, or an equivalent degree.
- Overall 7-10 years of experience in data modeling, analysis, and visualization in a manufacturing or industrial environment.
- Completed higher education in science, with a focus on chemistry, physics, mathematics, or a related discipline.
- Strong experience in methods for processing, evaluating, and modeling data.
- In-depth programming knowledge with a focus on data science and data engineering, preferably in Python.
- Experience using cheminformatics, materials informatics, generative AI, and machine learning.
- Experience in developing interactive data applications with frameworks like Shiny, Dash, or Streamlit.
- Good knowledge of relational data (SQL); experience with tools like JupyterLab, GitLab, Airflow, and MLflow; knowledge of chemical analysis and organic chemistry.

Posted 4 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer Position: Senior MLOps Engineer Location: Gurugram Relevant Experience Required: 6+ years Employment Type: Full-time About The Role We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and Cloud-Native Deployments . This role requires building and maintaining scalable ML pipelines , ensuring robust data integration and orchestration , and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools , data clustering , big data frameworks , and DevOps best practices , ensuring high reliability, performance, and security for enterprise AI workloads. Key Responsibilities MLOps & Machine Learning Deployment Design, implement, and maintain end-to-end ML pipelines from experimentation to production. Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks. Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD). Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection. Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs. Data Engineering & Integration Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data. Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster. Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation. Implement data clustering, partitioning, and sharding strategies for high availability and scalability. Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures). Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations). Cloud & Infrastructure Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run). Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow. Optimize for cost, latency, and scalability across distributed environments. Implement infrastructure as code (IaC) with Terraform or Pulumi. Real-Time ML & Advanced Capabilities Build real-time inference pipelines with low latency using gRPC, Triton Inference Server, or Ray Serve. Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search. Enable retrieval-augmented generation (RAG) pipelines for LLMs. Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization. Security, Monitoring & Observability Implement robust access control, encryption, and compliance with SOC2/GDPR/ISO27001. Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry. Ensure zero-downtime deployments with blue-green/canary release strategies. Manage audit trails and explainability for ML models. Preferred Skills & Qualifications Core Technical Skills Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus. MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC. Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt. Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB. Vector Databases: Pinecone, Weaviate, Milvus, Chroma. Visualization: Plotly Dash, Superset, Grafana. Tech Stack Orchestration: Kubernetes, Helm, Argo Workflows, Prefect. Infrastructure as Code: Terraform, Pulumi, Ansible. 
Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS). Model Optimization: ONNX, TensorRT, Hugging Face Optimum. Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams. Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
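The drift-detection responsibility in the posting above can be made concrete with a small sketch: comparing a production feature's distribution against its training baseline using a two-sample Kolmogorov-Smirnov test. The arrays and alerting threshold here are illustrative; a real system would run such checks per feature on a schedule and push results into the monitoring stack:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Baseline: feature values seen at training time (placeholder data)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Live traffic: the same feature observed in production, with a simulated shift
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

statistic, p_value = ks_2samp(training_feature, production_feature)

# Illustrative alerting rule; thresholds are tuned per feature in practice
if p_value < 0.01:
    print(f"Drift suspected: KS statistic={statistic:.3f}, p={p_value:.2e}")
else:
    print("No significant drift detected")
```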

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open source stack) . You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform. Key Responsibilities Infrastructure Setup & Management Design and implement cloud-native architectures on AWS and be able to manage on-premise open source environments when required . Automate infrastructure provisioning using tools like Terraform or CloudFormation. Maintain scalable environments for dev, staging, and production . CI/CD & Release Management Build and maintain CI/CD pipelines for backend, frontend, and AI workloads. Enable automated testing, security scanning, and artifact deployments. Manage configuration and secret management across environments. Containerization & Orchestration Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s) . Implement service mesh, auto-scaling, and rolling updates. Monitoring, Security, and Reliability Implement observability (logging, metrics, tracing) using open source or cloud tools. Ensure security best practices across infrastructure, pipelines, and deployed services. Troubleshoot incidents, manage disaster recovery, and support high availability. Model DevOps / MLOps Set up pipelines for AI/ML model deployment and monitoring (LLMOps). Support data pipelines, vector databases, and model hosting for AI applications. Required Skills and Qualifications Cloud & Infra Strong expertise in AWS services : EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc. Ability to set up and manage on-premise or hybrid environments using open source tools. DevOps & Automation Hands-on experience with Terraform / CloudFormation . Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD. Containerization & Orchestration Expertise with Docker and Kubernetes (EKS or self-hosted). Familiarity with Helm charts, service mesh (Istio/Linkerd). Monitoring / Observability Tools Experience with Prometheus, Grafana, ELK/EFK stack, CloudWatch . Knowledge of distributed tracing tools like Jaeger or OpenTelemetry. Security & Compliance Understanding of cloud security best practices . Familiarity with tools like Vault, AWS Secrets Manager. Model DevOps / MLOps Tools (Preferred) Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B) . Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation . Preferred Qualifications Knowledge of cost optimization for cloud and hybrid infrastructures . Exposure to infrastructure as code (IaC) best practices and GitOps workflows. Familiarity with serverless and event-driven architectures . Education Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience). What We Offer Opportunity to work on modern cloud-native systems and AI-powered platforms . Exposure to hybrid environments (AWS and open source on-prem). Competitive salary, benefits, and growth-oriented culture.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Lead Software Engineer (Python, GenAI, LLM) at EPAM India
Posted on June 18, 2024 | Closed on July 16, 2025
EPAM India | Hyderabad, Bangalore | Full Time

Job Description
We are seeking a skilled Lead Software Engineer to join our team and lead a project focused on developing GenAI applications using Large Language Models (LLMs) and Python programming. In this role, you will be responsible for designing and optimizing AI-generated text prompts to maximize effectiveness for various applications. You will also collaborate with cross-functional teams to ensure seamless integration of optimized prompts into the overall product or system. Your expertise in prompt engineering principles and techniques will allow you to guide models to desired outcomes and evaluate prompt performance to identify areas for optimization and iteration.

Responsibilities
- Design, develop, test, and refine AI-generated text prompts to maximize effectiveness for various applications
- Ensure seamless integration of optimized prompts into the overall product or system
- Rigorously evaluate prompt performance using metrics and user feedback
- Collaborate with cross-functional teams to understand requirements and ensure prompts align with business goals and user needs
- Document prompt engineering processes and outcomes, educate teams on prompt best practices, and keep updated on the latest AI advancements to bring innovative solutions to the project

Requirements
- 7 to 12 years of relevant professional experience
- Expertise in Python programming, including experience with AI/machine learning frameworks like TensorFlow, PyTorch, Keras, LangChain, MLflow, and Prompt flow
- 2-5 years of working knowledge of NLP and LLMs like BERT, GPT-3/4, T5, etc., including knowledge of how these models work and how to fine-tune them
- Expertise in prompt engineering principles and techniques like chain of thought, in-context learning, tree of thought, etc.
- Knowledge of retrieval-augmented generation (RAG)
- Strong analytical and problem-solving skills with the ability to think critically and troubleshoot issues
- Excellent communication skills, both verbal and written, in English at a B2+ level for collaborating across teams, explaining technical concepts, and documenting work outcomes
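As a toy illustration of the chain-of-thought prompting technique named in the requirements, here is a hedged sketch using the OpenAI Python client; the model name, system prompt, and question are placeholders, and any chat-completion-capable model could be substituted:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Chain-of-thought style instruction: ask the model to reason step by step
# before committing to a final answer.
system_prompt = (
    "You are a careful assistant. Think through the problem step by step, "
    "then give the final answer on a new line prefixed with 'Answer:'."
)
question = "A train leaves at 14:10 and the trip takes 2 hours 45 minutes. When does it arrive?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,        # deterministic output makes prompt evaluation easier
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Asking for an explicit "Answer:" line is a small design choice that makes the response easy to parse when the same prompts are later evaluated automatically.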

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Data Science Engineer What You Will Do Let’s do this. Let’s change the world. In this vital role We are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines. Roles & Responsibilities: Collaborate with data scientists to develop, train, and evaluate machine learning models. Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring. Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment. Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency. Develop and implement monitoring systems to track model performance and identify issues. Conduct A/B testing and experimentation to optimize model performance. Work closely with data scientists, engineers, and product teams to deliver ML solutions. Stay updated with the latest trends and advancements What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree / Bachelor's degree and 5 to 9 years [Job Code’s Discipline and/or Sub-Discipline] Functional Skills: Must-Have Skills: Solid foundation in machine learning algorithms and techniques Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); Experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD) Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn) Outstanding analytical and problem-solving skills; Ability to learn quickly; Good communication and interpersonal skills Good-to-Have Skills: Experience with big data technologies (e.g., Spark, Hadoop), and performance tuning in query and data processing Experience with data engineering and pipeline development Experience in statistical techniques and hypothesis testing, experience with regression analysis, clustering and classification Knowledge of NLP techniques for text analysis and sentiment analysis Experience in analyzing time-series data for forecasting and trend analysis What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. 
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
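The MLOps responsibilities in the listing above (pipeline building, model training, deployment, and monitoring) center on experiment tracking with tools like MLflow. As a rough illustration only, not Amgen's actual stack or code, here is a minimal Python sketch of an MLflow-tracked training run; the experiment name, dataset, and hyperparameters are hypothetical.

```python
# Minimal sketch of an MLflow-tracked training run, assuming a local or
# pre-configured tracking server; experiment name and parameters are
# illustrative, not taken from the posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

mlflow.set_experiment("demo-mlops-pipeline")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    f1 = f1_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                  # record hyperparameters
    mlflow.log_metric("f1_score", f1)          # record evaluation metric
    mlflow.sklearn.log_model(model, "model")   # version the trained artifact
```

Runs logged this way can be compared side by side in the MLflow UI, which is the usual starting point for the deployment and monitoring steps this kind of role describes.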

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description

About Mirantis

Mirantis is the Kubernetes-native AI infrastructure company, enabling organizations to build and operate scalable, secure, and sovereign infrastructure for modern AI, machine learning, and data-intensive applications. By combining open source innovation with deep expertise in Kubernetes orchestration, Mirantis empowers platform engineering teams to deliver composable, production-ready developer platforms across any environment—on-premises, in the cloud, at the edge, or in sovereign data centers. As enterprises navigate the growing complexity of AI-driven workloads, Mirantis delivers the automation, GPU orchestration, and policy-driven control needed to manage infrastructure with confidence and agility. Committed to open standards and freedom from lock-in, Mirantis ensures that customers retain full control of their infrastructure strategy.

Job Description

Mirantis is adding a Pre-Sales Solution Architect to our team. As part of our client-facing technical team, you will leverage your technical and consultative expertise to guide clients from their current state to strategic solutions that deliver measurable, real-world business outcomes. You will work in lockstep with the sales team to communicate Mirantis’ vision, features, and value, and with the services team to ensure that the vision and outcomes are delivered to the client. You will also work with product management, marketing, and engineering teams to understand our product and offerings, and to provide feedback which will define enhancements to our solutions. The Pre-Sales Solution Architect position requires technical thought leadership and offers candidates opportunities for professional growth across a variety of business and technology domains. You will play a key role as an individual leader in the field and contribute internally to an organizational culture that deeply values transparency, performance, development, creativity, collaboration, and trust.

Responsibilities

Act as a trusted technical advisor and client advocate. Build long-term, value-driven relationships with stakeholders.
Understand client goals and architect tailored solutions using Mirantis products and open-source technologies to drive business outcomes.
Drive technical wins during sales engagements by demonstrating solution fit, feasibility, and alignment with strategic objectives.
Ensure seamless transitions from pre-sales to post-sales with ongoing technical guidance for successful delivery and adoption.
Collaborate cross-functionally across Sales, Product, Services, and CTO teams to align on customer qualification, solutioning, and execution.
Develop and present custom demos, prototypes, and reference architectures that align with customer use cases.
Provide structured field insights and client feedback to support product and services strategy.
Stay current with emerging technologies (e.g., Kubernetes, AI/ML) and serve as a knowledge-sharing resource internally and externally.
Identify and nurture account and partner growth opportunities, aligning solutions to customer and partner visions.
Contribute to strategic account planning and customer success roadmaps to support long-term engagements.

Qualifications

Required Skills/Abilities

Proven experience engaging with senior IT and engineering leadership, including CIOs and CTOs, and aligning technical solutions with strategic priorities.
Demonstrated thought leadership through client engagement, public speaking (e.g., conferences, webinars), or community contributions.
Strong understanding of the competitive landscape within the cloud infrastructure and open-source ecosystem.
Practical knowledge of distributed systems, modern application architectures, software development practices, and DevOps methodologies.
Awareness of industry trends, compliance requirements, and regulatory environments, with the ability to assess their impact on client needs.
Familiarity with AI/ML infrastructure tools (e.g., Kubeflow, MLflow, NVIDIA AI) and how they integrate with cloud-native platforms.
Demonstrated hands-on experience in the following areas:
Open-source software stacks
Cloud infrastructure including Kubernetes, OpenStack, Docker, microservices, and public cloud services (AWS, GCP, Azure)
DevOps and automation tools, including CI/CD pipelines
Observability tooling, including open-source monitoring, logging, and alerting systems
Experience working effectively in a globally distributed team environment.

Qualifications

5+ years of experience in a client-facing technical role such as Sales Engineer, Consultant, or Solutions Architect
Four-year college degree preferred
Excellent written and verbal communication skills, including public speaking and demonstrations
Technical certifications (optional), such as Certified Kubernetes Administrator (CKA), AWS Certified Solutions Architect, or similar industry-recognized credentials
Ability to travel up to 50%

Additional Information

Why you’ll love Mirantis

Work with an established leader in the cloud infrastructure industry.
Work with exceptionally passionate, talented and engaging colleagues, helping Fortune 500 and Global 2000 customers implement next-generation cloud technologies.
Be a part of cutting-edge, open-source innovation.
Thrive in the high-energy environment of a young company where openness, collaboration, risk-taking, and continuous growth are valued.
Receive a competitive compensation package with a strong benefits plan.
We are a Leader for Container Management in G2 (#2 after AWS)!

It is understood that Mirantis, Inc. may use automated decision-making technology (ADMT) for specific employment-related decisions. Opting out of ADMT use is requested for decisions about evaluation and review connected with the specific employment decision for the position applied for. You also have the right to appeal any decisions made by ADMT by sending your request to isamoylova@mirantis.com. By submitting your resume, you consent to the processing and storage of your personal data in accordance with applicable data protection laws, for the purposes of considering your application for current and future job opportunities.

Posted 4 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Overview

Working at Atlassian

Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities

About the JSM Team

Jira Service Management is one of the marquee products of Atlassian. Through this solution, we are helping technical and non-technical teams centralize and streamline service requests, respond to incidents, collect and maintain knowledge, manage assets and configuration items, and more. Specifically, this team within JSM works on using assistive AI to automate IT operational tasks, troubleshoot problems, and reduce mental overload for on-call engineers and their peers. By weaving these capabilities into our product, we revolutionise AIOps by moving from a traditional reactive troubleshooting-based system to a proactive problem-solving approach.

What You Will Do

As a Principal Engineer on the JSM team, you will get the opportunity to work on cutting-edge AI and ML algorithms that help modernize IT Operations by reducing MTTR (mean time to resolve) and MTTI (mean time to identify). You will use your software development expertise to solve difficult problems, tackling complex infrastructure and architecture challenges.

In This Role, You'll Get The Chance To

Shape the future of AIOps: Be at the forefront of innovation, shaping the next generation of AI-powered operations tools that predict, prevent, and resolve IT issues before they impact our customers
Master Generative AI: Delve into the world of generative models, exploring their potential to detect anomalies, automate responses, and personalize remediation plans
Become a machine learning maestro: Hone your skills in both supervised and unsupervised learning, building algorithms that analyze mountains of data to uncover hidden patterns and optimize system performance
Collaborate with diverse minds: Partner with a brilliant team of engineers, data scientists, and researchers, cross-pollinating ideas and learning from each other's expertise
Make a tangible impact: Your work will directly influence the reliability and performance of Atlassian's critical software, driving customer satisfaction and propelling our business forward
Routinely tackle complex architectural challenges, spar with principal engineers to build ML pipelines and models that scale for thousands of customers
Lead code reviews and documentation, and take on complex bug fixes, especially on high-risk problems

Our tech stack is primarily Python/Java/Kotlin built on AWS.

On your first day, we’ll expect you to have

Fluency in Python
Solid understanding of machine learning concepts and algorithms, including supervised and unsupervised learning, deep learning, and NLP
Familiarity with popular ML libraries like scikit-learn, Keras/TensorFlow/PyTorch, NumPy, pandas
Good understanding of the machine learning project lifecycle
Experience in architecting and implementing high-performance RESTful microservices (API development for ML models)
Familiarity with MLOps and experience with scaling and deploying machine learning models

It would be great, but not required, if you have

Experience with cloud-based machine learning platforms (e.g., AWS SageMaker, Azure ML Service, Databricks)
Experience with MLOps tools (MLflow, Tecton, Pinecone, feature stores)
Experience with AIOps or related fields like IT automation or incident management
Experience building and operating large-scale distributed systems using Amazon Web Services (S3, Kinesis, CloudFormation, EKS, AWS Security and Networking)
Experience using OpenAI LLMs

Qualifications

Compensation

At Atlassian, we strive to design equitable, explainable, and competitive compensation programs. To support this goal, the baseline of our range is higher than that of the typical market range, but in turn we expect to hire most candidates near this baseline. Base pay within the range is ultimately determined by a candidate's skills, expertise, or experience. In the United States, we have three geographic pay zones. For this role, our current base pay ranges for new hires in each zone are:

Zone A: $199,400 - $265,800
Zone B: $179,400 - $239,200
Zone C: $165,500 - $220,600

This role may also be eligible for benefits, bonuses, commissions, and equity. Please visit go.atlassian.com/payzones for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.

Benefits & Perks

Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits .

About Atlassian

At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
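The AIOps work described above leans on unsupervised learning to surface anomalies in operational telemetry. As a hedged illustration (not Atlassian's implementation), the sketch below flags latency outliers with scikit-learn's IsolationForest; the synthetic latency numbers and contamination rate are assumptions.

```python
# Minimal unsupervised anomaly-detection sketch in the spirit of the AIOps
# work described above; the synthetic "latency" metrics are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_latency = rng.normal(loc=120, scale=15, size=(980, 1))  # typical ms latencies
spikes = rng.normal(loc=900, scale=50, size=(20, 1))           # injected incidents
latencies = np.vstack([normal_latency, spikes])

detector = IsolationForest(contamination=0.02, random_state=0).fit(latencies)
flags = detector.predict(latencies)  # -1 = anomaly, 1 = normal
print(f"flagged {int((flags == -1).sum())} of {len(latencies)} samples as anomalous")
```

In a production AIOps setting the same idea would run against streaming telemetry and feed alerting, rather than a one-off batch as sketched here.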

Posted 4 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're on the lookout for an experienced MLOps Engineer to support our growing AI/ML initiatives, including GenAI platforms, agentic AI systems, and large-scale model deployments.

Experience - 7+ years
Location - Pune
Notice Period - Short Joiners

Primary Skills

Cloud (Google Cloud preferred, but any cloud will do)
ML deployment using Jenkins/Harness
Python
Kubernetes
Terraform

Key Responsibilities

Build and manage CI/CD pipelines for ML model training, RAG systems, and LLM workflows
Optimize GPU-powered Kubernetes environments for distributed compute
Manage cloud-native infrastructure across AWS or GCP using Terraform
Deploy vector databases, feature stores, and observability tools
Ensure security, scalability, and high availability of AI workloads
Collaborate cross-functionally with AI/ML engineers and data scientists
Enable agentic AI workflows using tools like LangChain, LangGraph, CrewAI, etc.

What We’re Looking For

4+ years in DevOps/MLOps/Infra Engineering, including 2+ years in AI/ML setups
Hands-on with AWS SageMaker, GCP Vertex AI, or Azure ML
Proficient in Python, Bash, and CI/CD tools (GitHub Actions, ArgoCD, Jenkins)
Deep Kubernetes expertise and experience managing GPU infra
Strong grasp of monitoring, logging, and secure deployment practices

💡 Bonus Points For
🔸 Familiarity with MLflow, Kubeflow, or similar
🔸 Experience with RAG, prompt engineering, or model fine-tuning
🔸 Knowledge of model drift detection and rollback strategies

Ready to take your MLOps career to the next level? Apply now or email me at amruta.bu@peoplefy.com for more details
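One of the bonus areas above is model drift detection and rollback. A common baseline is to compare a feature's recent serving distribution against its training distribution with a two-sample Kolmogorov–Smirnov test; the sketch below assumes synthetic data and an illustrative alerting threshold, not any company standard.

```python
# Sketch of a simple feature-drift check using a two-sample KS test;
# the threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference distribution
serving_feature = rng.normal(loc=0.4, scale=1.2, size=1000)   # recent production values

result = ks_2samp(training_feature, serving_feature)
DRIFT_P_THRESHOLD = 0.01  # hypothetical alerting threshold

if result.pvalue < DRIFT_P_THRESHOLD:
    print(f"drift detected (KS={result.statistic:.3f}, p={result.pvalue:.2e}); "
          "consider rollback or retraining")
else:
    print(f"no significant drift (KS={result.statistic:.3f}, p={result.pvalue:.2e})")
```

A check like this would typically run on a schedule inside the CI/CD or monitoring stack the listing mentions, with rollback triggered when drift persists across several windows.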

Posted 4 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

This role is for one of Weekday's clients

Min Experience: 7 years
Location: Gurugram
Job Type: Full-time

Requirements

We are looking for an experienced Data Engineer with deep expertise in Azure and/or AWS Databricks to join our growing data engineering team. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, enabling seamless data integration and real-time analytics. This role is ideal for professionals who have hands-on experience with cloud-based data platforms, big data processing frameworks, and a strong understanding of data modeling, pipeline orchestration, and performance tuning. You will work closely with data scientists, analysts, and business stakeholders to deliver scalable and reliable data infrastructure that supports high-impact decision-making across the organization.

Key Responsibilities:

Design and Development of Data Pipelines: Design, develop, and maintain scalable and efficient data pipelines using Databricks on Azure or AWS. Integrate data from multiple sources including structured, semi-structured, and unstructured datasets. Implement ETL/ELT pipelines for both batch and real-time processing.
Cloud Data Platform Expertise: Use Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, or similar services to build robust and secure data workflows. Optimize storage, compute, and processing costs using appropriate services within the cloud environment.
Data Modeling & Governance: Build and maintain enterprise-grade data models, schemas, and lakehouse architecture. Ensure adherence to data governance, security, and privacy standards, including data lineage and cataloging.
Performance Tuning & Monitoring: Optimize data pipelines and query performance through partitioning, caching, indexing, and memory management. Implement monitoring tools and alerts to proactively address pipeline failures or performance degradation.
Collaboration & Documentation: Work closely with data analysts, BI developers, and data scientists to understand data requirements. Document all processes, pipelines, and data flows for transparency, maintainability, and knowledge sharing.
Automation and CI/CD: Develop and maintain CI/CD pipelines for automated deployment of data pipelines and infrastructure using tools like GitHub Actions, Azure DevOps, or Jenkins. Implement data quality checks and unit tests as part of the data development lifecycle.

Skills & Qualifications:

Bachelor's or Master's degree in Computer Science, Engineering, or related field.
7+ years of experience in data engineering roles with hands-on experience in Azure or AWS ecosystems.
Strong expertise in Databricks (including notebooks, Delta Lake, and MLflow integration).
Proficiency in Python and SQL; experience with PySpark or Spark strongly preferred.
Experience with data lake architectures, data warehouse platforms (like Snowflake, Redshift, Synapse), and lakehouse principles.
Familiarity with infrastructure as code (Terraform, ARM templates) is a plus.
Strong analytical and problem-solving skills with attention to detail.
Excellent verbal and written communication skills.
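For context on the Databricks skills requested above, here is a minimal PySpark batch ETL sketch in the ingest-clean-write-to-Delta style the listing describes; the input path, column names, and output location are hypothetical.

```python
# Minimal PySpark batch ETL sketch: read raw CSV, clean it, write a Delta table.
# Paths and columns are made up; Delta Lake is available out of the box on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("/mnt/raw/orders/"))                      # assumed landing zone

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])              # basic data-quality step
           .filter(F.col("amount") > 0))

(cleaned.write
 .format("delta")                                     # Delta Lake table
 .mode("overwrite")
 .partitionBy("order_date")
 .save("/mnt/curated/orders_delta"))
```

Partitioning by date and deduplicating on a business key are the kinds of performance and data-quality choices the responsibilities section alludes to.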

Posted 4 days ago

Apply

40.0 years

6 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Let’s do this. Let’s change the world. In this vital role, you will serve as a Senior Associate IS Business Systems Analyst with strong data science and analytics expertise, joining the Digital Workplace Experience (DWX) Automation & Analytics product team. In this role, you will develop, maintain, and optimize machine learning models, forecasting tools, and operational dashboards that support strategic and day-to-day decisions for global digital workplace services. This role is ideal for candidates with hands-on experience building predictive models and working with large operational datasets to uncover insights and deliver automation solutions. You will work alongside product owners, engineers, and service leads to deliver measurable business value using data-driven tools and techniques.

Roles and Responsibilities

Design, develop, and maintain predictive models, decision support tools, and dashboards using Python, R, SQL, Power BI, or similar platforms.
Partner with delivery teams to embed data science outputs into business operations, focusing on improving efficiency, reliability, and end-user experience in Digital Workplace services.
Build and automate data pipelines for data ingestion, cleansing, transformation, and model training using structured and unstructured datasets.
Monitor, maintain, and tune models to ensure accuracy, interpretability, and sustained business impact.
Support efforts to operationalize ML models by working with data engineers and platform teams on integration and automation.
Conduct data exploration, hypothesis testing, and statistical analysis to identify optimization opportunities across services like endpoint health, service desk operations, mobile technology, and collaboration platforms.
Provide ad hoc and recurring data-driven recommendations to improve automation performance, service delivery, and capacity forecasting.
Develop reusable components, templates, and frameworks that support analytics and automation scalability across DWX.
Collaborate with other data scientists, analysts, and developers to implement best practices in model development and lifecycle management.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications.

Basic Qualifications:

Master's degree / Bachelor's degree and 5 to 9 years in Data Science, Computer Science, IT, or related field

Must-Have Skills

Experience working with large-scale datasets in enterprise environments and with data visualization tools such as Power BI, Tableau, or equivalent
Strong experience developing models in Python or R for regression, classification, clustering, forecasting, or anomaly detection
Proficiency in SQL and working with relational and non-relational data sources

Nice-to-Have Skills

Familiarity with ML pipelines, version control (e.g., Git), and model lifecycle tools (MLflow, SageMaker, etc.)
Understanding of statistics, data quality, and evaluation metrics for applied machine learning
Ability to translate operational questions into structured analysis and model design
Experience with cloud platforms (Azure, AWS, GCP) and tools like Databricks, Snowflake, or BigQuery
Familiarity with automation tools or scripting (e.g., PowerShell, Bash, Airflow)
Working knowledge of Agile/SAFe environments
Exposure to ITIL practices or ITSM platforms such as ServiceNow

Soft Skills

Analytical mindset with attention to detail and data integrity
Strong problem-solving and critical thinking skills
Ability to work independently and drive tasks to completion
Strong collaboration and teamwork skills
Adaptability in a fast-paced, evolving environment
Clear and concise documentation habits

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
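The forecasting and capacity-planning work mentioned above is often prototyped with classical time-series models before anything heavier. As an illustrative sketch only (synthetic weekly ticket volumes, not Amgen data), the following fits a Holt-Winters model with statsmodels and forecasts the next eight weeks.

```python
# Minimal time-series forecasting sketch (Holt-Winters) for the kind of
# capacity/ticket-volume forecasting mentioned above; the weekly series
# below is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
weeks = pd.date_range("2022-01-02", periods=156, freq="W")  # ~3 years of weekly data
trend = 500 + 2.0 * np.arange(156)
seasonality = 60 * np.sin(np.arange(156) * 2 * np.pi / 52)
volume = trend + seasonality + rng.normal(0, 20, 156)
series = pd.Series(volume, index=weeks)

model = ExponentialSmoothing(
    series, trend="add", seasonal="add", seasonal_periods=52
).fit()

forecast = model.forecast(8)  # expected ticket volume for the next 8 weeks
print(forecast.round(0))
```

In practice the forecast would be validated against held-out weeks and surfaced in a Power BI or Tableau dashboard, as the listing describes.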

Posted 4 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Science Engineer

What you will do

Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.

Roles & Responsibilities:

Collaborate with data scientists to develop, train, and evaluate machine learning models
Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring
Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment
Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency
Develop and implement monitoring systems to track model performance and identify issues
Conduct A/B testing and experimentation to optimize model performance
Work closely with data scientists, engineers, and product teams to deliver ML solutions
Stay updated with the latest trends and advancements in machine learning and MLOps

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:

Master's degree / Bachelor's degree and 5 to 9 years [Job Code’s Discipline and/or Sub-Discipline]

Functional Skills:

Must-Have Skills:

Solid foundation in machine learning algorithms and techniques
Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow)
Experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD)
Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn)
Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills

Good-to-Have Skills:

Experience with big data technologies (e.g., Spark, Hadoop) and performance tuning in query and data processing
Experience with data engineering and pipeline development
Experience in statistical techniques and hypothesis testing, including regression analysis, clustering, and classification
Knowledge of NLP techniques for text analysis and sentiment analysis
Experience in analyzing time-series data for forecasting and trend analysis

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
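This listing, like the earlier Amgen posting, calls out A/B testing and experimentation to optimize model performance. As a hedged, generic sketch (the conversion counts below are invented), a two-proportion z-test from statsmodels is one simple way to check whether two model variants differ significantly.

```python
# Minimal sketch of an A/B significance check for two model variants;
# the success counts and sample sizes are made-up illustrative numbers.
from statsmodels.stats.proportion import proportions_ztest

successes = [620, 680]          # e.g., positive outcomes under variant A vs. variant B
observations = [10_000, 10_000]  # impressions served to each variant

stat, p_value = proportions_ztest(count=successes, nobs=observations)
print(f"z={stat:.2f}, p={p_value:.4f}")

if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference detected; keep collecting data")
```

A real experiment would also fix the sample size in advance and account for multiple comparisons, but this is the core calculation behind the A/B testing responsibility.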

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies