Home
Jobs

275 Drift Jobs - Page 3

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy discovery, but you apply directly on the original job portal.

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description

Job Summary
We are seeking a Senior IaC Engineer to architect, develop, and automate D&A GCP PaaS services and Databricks platform provisioning using Terraform, Spacelift, and GitHub. This role combines the depth of platform engineering with the principles of reliability engineering, enabling resilient, secure, and scalable cloud environments. The ideal candidate has 6+ years of hands-on experience with IaC, CI/CD, infrastructure automation, and driving cloud infrastructure reliability.

Key Responsibilities

Infrastructure & Automation
- Design, implement, and manage modular, reusable Terraform modules to provision GCP resources (BigQuery, GCS, VPC, IAM, Pub/Sub, Composer, etc.).
- Automate provisioning of Databricks workspaces, clusters, jobs, service principals, and permissions using Terraform.
- Build and maintain CI/CD pipelines for infrastructure deployment and compliance using GitHub Actions and Spacelift.
- Standardize and enforce GitOps workflows for infrastructure changes, including code reviews and testing.
- Integrate infrastructure cost control, policy-as-code, and secrets management into automation pipelines.

Architecture & Reliability
- Lead the design of scalable and highly reliable infrastructure patterns across GCP and Databricks.
- Implement resilient, fault-tolerant designs, backup/recovery mechanisms, and automated alerting around infrastructure components.
- Partner with SRE and DevOps teams to enable observability, performance monitoring, and automated incident-response tooling.
- Develop proactive monitoring and drift detection for Terraform-managed resources.
- Contribute to reliability reviews, runbooks, and disaster recovery strategies for cloud resources.

Collaboration & Governance
- Work closely with security, networking, FinOps, and platform teams to ensure compliance, cost-efficiency, and best practices.
- Define Terraform standards, module registries, and access patterns for scalable infrastructure usage.
- Provide mentorship, peer code reviews, and knowledge sharing across engineering teams.

Required Skills & Experience
- 6+ years of experience with Terraform and Infrastructure as Code (IaC), with deep expertise in GCP provisioning.
- Experience automating Databricks (clusters, jobs, users, ACLs) using Terraform.
- Strong hands-on experience with Spacelift (or similar tools such as Terraform Cloud or Atlantis) and GitHub CI/CD workflows.
- Deep understanding of infrastructure reliability principles: high availability, fault tolerance, rollback strategies, and zero-downtime deployments.
- Familiarity with monitoring/logging frameworks (Cloud Monitoring, Stackdriver, Datadog, etc.).
- Strong scripting and debugging skills for troubleshooting infrastructure or CI/CD failures.
- Proficiency with GCP networking, IAM policies, folder/project structure, and Org Policy configuration.

Nice to Have
- HashiCorp Certified: Terraform Associate or Architect.
- Familiarity with SRE principles (SLOs, error budgets, alerting).
- Exposure to FinOps strategies: cost controls, tagging policies, budget alerts.
- Experience with container orchestration (GKE/Kubernetes); Cloud Composer is a plus.

No relocation support available.

Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong global and local brands, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy, and the second position in gum. Our 80,000 makers and bakers are located in more than 80 countries, and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen, and happen fast.

Mondelēz International is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular
Function: Analytics & Modelling / Analytics & Data Science
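Drift detection for Terraform-managed resources, mentioned in the responsibilities above, is commonly built on `terraform plan -detailed-exitcode`, which exits 0 when state matches reality, 2 when changes (drift) are present, and 1 on error. A minimal sketch of such a check, assuming Terraform 0.15.4+ for the `-refresh-only` flag; the wrapper function itself is hypothetical:

```python
import subprocess

def detect_drift(workdir: str) -> bool:
    """Return True if live infrastructure has drifted from Terraform state.

    `terraform plan -refresh-only -detailed-exitcode` exits 0 when no
    changes are detected, 2 when the refreshed state differs (drift),
    and 1 on error.
    """
    result = subprocess.run(
        ["terraform", "plan", "-refresh-only", "-detailed-exitcode", "-input=false"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed: {result.stderr}")
    return result.returncode == 2
```

In a Spacelift or GitHub Actions job, a `True` result would typically page the on-call engineer or open a remediation pull request.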

Posted 2 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Hiring for top Unicorns & Soonicorns of India!

We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
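The A/B-testing duty above often reduces to comparing two proportions, e.g. completion rates under the current model and a candidate model. A self-contained sketch using a two-sided two-proportion z-test; the counts are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a: int, n_a: int,
                          success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control model A vs. candidate model B (hypothetical outcome counts)
z, p = two_proportion_z_test(success_a=480, n_a=1000, success_b=540, n_b=1000)
```

With these counts the uplift is significant at the usual 5% level, so the candidate would graduate from the experiment.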

Posted 3 days ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Chennai

On-site

Designation: Senior Analyst – Data Science
Level: L2
Experience: 4 to 6 years
Location: Chennai

Job Description:
We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments.

Responsibilities:
- Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production
- Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment
- Develop CI/CD pipelines for machine learning models, ensuring reliable and automated deployment processes
- Monitor model performance in production and implement automated retraining and rollback mechanisms
- Collaborate with data scientists to productionize models and optimize them for performance and scalability
- Implement model versioning, experiment tracking, and metadata management systems
- Build monitoring and alerting systems for model drift, data quality, and system performance
- Manage containerization and orchestration of ML workloads using Docker and Kubernetes
- Optimize model-serving infrastructure for low-latency predictions and high throughput
- Ensure compliance with financial regulations and implement proper model governance frameworks

Skills:
- 4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services
- Strong expertise in deploying and scaling machine learning models in production environments
- Extensive experience with Metaflow for ML pipeline orchestration and workflow management
- Advanced proficiency with Git and version control systems, including branching strategies and collaborative workflows
- Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes)
- Strong programming skills in Python with experience in ML libraries (pandas, NumPy, scikit-learn)
- Experience with CI/CD tools and practices for ML workflows
- Knowledge of distributed computing and cloud-based ML infrastructure
- Understanding of model monitoring, A/B testing, and feature store management

Additional Skillsets:
- Experience with Hex or similar data analytics platforms
- Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems
- Experience with real-time model serving and streaming data processing
- Familiarity with MLflow, Kubeflow, or other ML lifecycle management tools
- Understanding of financial regulations and model governance requirements

Job Snapshot
Updated Date: 13-06-2025
Job ID: J_3745
Location: Chennai, Tamil Nadu, India
Experience: 4 - 6 Years
Employee Type: Permanent
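Model-drift monitoring of the kind described above is often implemented with the Population Stability Index (PSI) between a training-time score distribution and the live one. A minimal NumPy sketch; the 0.1/0.25 alert thresholds are conventional rules of thumb, not part of this posting:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (expected) and production (actual) distribution.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # clamp production scores into the baseline range so every value is binned
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid division by / log of zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # scores at training time
stable = rng.normal(0.0, 1.0, 10_000)     # production, no drift
shifted = rng.normal(0.5, 1.0, 10_000)    # production, mean shift
psi_stable = population_stability_index(baseline, stable)
psi_shifted = population_stability_index(baseline, shifted)
```

A production alerting system would compute this per feature and per model score on a schedule and trigger the retraining/rollback mechanisms the role describes.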

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

RESPONSIBILITIES
- Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
- Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools, with a strong preference for Terraform.
- Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
- Support vector database, feature store, and embedding store deployments (e.g., pgVector, Pinecone, Redis, Featureform, MongoDB Atlas, etc.).
- Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
- Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
- Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
- Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
- 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
- Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
- Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
- Proficiency in scripting languages like Python and Bash; Go or similar is a nice plus.
- Experience with monitoring, logging, and alerting systems for AI/ML workloads.
- Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
- Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
- Familiarity with prompt engineering, model fine-tuning, and inference serving.
- Experience with secure AI deployment and compliance frameworks.
- Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
- Ability to work with a high level of initiative, accuracy, and attention to detail.
- Ability to prioritize multiple assignments effectively.
- Ability to meet established deadlines.
- Ability to interact successfully, efficiently, and professionally with staff and customers.
- Excellent organization skills.
- Critical thinking ability, ranging from moderately to highly complex problems.
- Flexibility in meeting the business needs of the customer and the company.
- Ability to work creatively and independently with latitude and minimal supervision.
- Ability to apply experience and judgment in accomplishing assigned goals.
- Experience navigating organizational structure.
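Vector-store deployments like those listed above (pgVector, Pinecone) ultimately serve nearest-neighbor queries over embeddings. A toy NumPy sketch of the core operation, cosine-similarity top-k retrieval; the 2-D vectors are illustrative stand-ins for real embedding vectors:

```python
import numpy as np

def top_k_similar(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> list[int]:
    """Indices of the k corpus vectors most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity to every corpus row
    return np.argsort(-sims)[:k].tolist()

# Tiny stand-in "embedding store"
corpus = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
nearest = top_k_similar(np.array([1.0, 0.05]), corpus, k=2)
```

Managed stores add approximate-nearest-neighbor indexes (HNSW, IVF) so this lookup stays fast at millions of vectors, but the scoring is the same.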

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

RESPONSIBILITIES
- Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
- Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools, with a strong preference for Terraform.
- Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
- Support vector database, feature store, and embedding store deployments (e.g., pgVector, Pinecone, Redis, Featureform, MongoDB Atlas, etc.).
- Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
- Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
- Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
- Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
- 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
- Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
- Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
- Proficiency in scripting languages like Python and Bash; Go or similar is a nice plus.
- Experience with monitoring, logging, and alerting systems for AI/ML workloads.
- Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
- Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
- Familiarity with prompt engineering, model fine-tuning, and inference serving.
- Experience with secure AI deployment and compliance frameworks.
- Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
- Ability to work with a high level of initiative, accuracy, and attention to detail.
- Ability to prioritize multiple assignments effectively.
- Ability to meet established deadlines.
- Ability to interact successfully, efficiently, and professionally with staff and customers.
- Excellent organization skills.
- Critical thinking ability, ranging from moderately to highly complex problems.
- Flexibility in meeting the business needs of the customer and the company.
- Ability to work creatively and independently with latitude and minimal supervision.
- Ability to apply experience and judgment in accomplishing assigned goals.
- Experience navigating organizational structure.
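The "scalable rollback strategies" item above is often driven by a rolling health check on a newly deployed model: if its error rate over a recent window exceeds a threshold, traffic is shifted back to the previous version. A minimal sketch; the window size and threshold are illustrative, not from the posting:

```python
from collections import deque

class RollbackMonitor:
    """Track a rolling window of request outcomes and flag rollback
    when the error rate exceeds a threshold (illustrative values)."""

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)
        self.max_error_rate = max_error_rate

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    def should_rollback(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet to judge
        errors = self.outcomes.count(False)
        return errors / len(self.outcomes) > self.max_error_rate

monitor = RollbackMonitor(window=10, max_error_rate=0.2)
for ok in [True] * 7 + [False] * 3:   # 30% errors in the window
    monitor.record(ok)

healthy = RollbackMonitor(window=10, max_error_rate=0.2)
for ok in [True] * 10:
    healthy.record(ok)
```

In practice the same check runs as a sidecar or alerting rule (e.g., Prometheus) and triggers the deployment system, rather than living inside the service.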

Posted 3 days ago

Apply

4.0 years

0 Lacs

West Bengal, India

On-site


Hiring for top Unicorns & Soonicorns of India!

We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, models, Docker, MLflow, data analysis, AWS, Kubernetes, ML, artificial intelligence, FastAPI, GCP, NumPy
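As a concrete (toy) illustration of the supervised-learning work described above, here is a from-scratch k-nearest-neighbors classifier predicting a student outcome from two hypothetical features; a real pipeline would use scikit-learn and far richer data:

```python
import numpy as np

def knn_predict(X_train: np.ndarray, y_train: np.ndarray,
                X_new, k: int = 3) -> np.ndarray:
    """Predict labels by majority vote among the k nearest training points."""
    preds = []
    for x in np.atleast_2d(X_new):
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance
        nearest = y_train[np.argsort(dists)[:k]]      # labels of k neighbors
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

# Hypothetical features: [attendance rate, assignment score]
X = np.array([[0.90, 0.80], [0.95, 0.90], [0.20, 0.30], [0.30, 0.10]])
y = np.array([1, 1, 0, 0])  # 1 = likely to complete the course
pred = knn_predict(X, y, [[0.85, 0.75]], k=3)
```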

Posted 3 days ago

Apply

4.0 years

0 Lacs

Karnataka, India

On-site


Hiring for top Unicorns & Soonicorns of India!

We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, models, Docker, MLflow, data analysis, AWS, Kubernetes, ML, artificial intelligence, FastAPI, GCP, NumPy

Posted 3 days ago

Apply

4.0 years

0 Lacs

Delhi, India

On-site


Hiring for top Unicorns & Soonicorns of India!

We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, models, Docker, MLflow, data analysis, AWS, Kubernetes, ML, artificial intelligence, FastAPI, GCP, NumPy

Posted 3 days ago

Apply

4.0 years

0 Lacs

Maharashtra, India

On-site


Hiring for top Unicorns & Soonicorns of India!

We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, Pandas, models, Docker, MLflow, data analysis, AWS, Kubernetes, ML, artificial intelligence, FastAPI, GCP, NumPy

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Responsibilities
- Testing of various vendors' metering-related software (BCS & CMRI) to ensure alignment with business requirements.
- Providing support to internal stakeholders (i.e., Vigilance, I&C, LTP2) on software updates.
- Evaluation and testing of new electricity meters on-site at vendor locations and/or in the laboratory.
- Identifying and reinstating IMTE instruments for proper functioning, thereby bringing down the overall repair cost of instruments.
- Coordinating implementation of AMR-based billing reads for HT, LTP2, EA & streetlight meters, thereby reducing billing time, increasing billing efficiency, and enabling early/timely realization of revenue.
- Calculating drift of reference instruments by monitoring calibration trends and history, thereby ensuring instruments operate within specified limits.
- Calculating uncertainty of the accredited scope.
- Performing the mandatory quality assurance program, thereby assuring trust in the reliability of generated laboratory results.
- Developing innovative ideas in the given context of various stakeholders to enhance efficiency and brand image.

Qualifications
Qualification: BE / BTech – Electrical / Electronics
Experience: 3 - 5 years
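The reference-instrument drift calculation described above can be approximated with a least-squares fit over the calibration history: the slope is the drift rate, and extrapolating to the permissible-error limit predicts when recalibration is due. A sketch with hypothetical calibration data and limits:

```python
import numpy as np

# Hypothetical calibration history of a reference meter:
# days since first calibration vs. measured error (% of reading).
days = np.array([0, 90, 180, 270, 360])
error_pct = np.array([0.010, 0.018, 0.024, 0.033, 0.041])

# Least-squares linear fit: slope = drift rate per day
drift_per_day, intercept = np.polyfit(days, error_pct, 1)
drift_per_year = drift_per_day * 365

# Extrapolate to a hypothetical maximum permissible error
limit = 0.05  # % of reading
days_to_limit = (limit - intercept) / drift_per_day
```

Monitoring this slope across successive calibrations is what keeps the instrument demonstrably within its specified limit between calibration cycles.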

Posted 3 days ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site


Job Title: Flutter Developer
Experience: 6+ Years
Location: Kochi & Pollachi
Notice Period: Immediate to a maximum of 10 days

Key Responsibilities:
- Design, develop, and maintain high-quality mobile applications using Flutter.
- Translate functional requirements into responsive and efficient solutions.
- Follow Android-recommended design principles, interface guidelines, and coding best practices.
- Deploy and manage applications on the Google Play Store, Apple App Store, and enterprise app stores.
- Collaborate with backend developers to define, integrate, and ship new features.
- Ensure optimal application performance, quality, and responsiveness.
- Take end-to-end ownership of the mobile solution lifecycle, from design through deployment and support.

Technical Skills & Requirements:
- Deep expertise in Flutter and the Dart programming language.
- Proficiency with Flutter state management solutions like Provider, Riverpod, and Flutter Hooks.
- Experience with Flutter tools and libraries such as Freezed, Dio, Shared Preferences, GoRouter, and Drift (formerly Moor).
- Strong understanding of RESTful APIs and integration of third-party SDKs/APIs.
- Experience with version control systems such as Git.
- Strong problem-solving abilities and keen attention to detail.
- Ability to work independently and deliver results in a fast-paced environment.

Posted 3 days ago

Apply

12.0 years

0 Lacs

Goa, India

On-site


If you are wondering what makes TRIMEDX different, it's that all of our associates share a common purpose of serving clients, patients, communities, and each other with equal measures of care and performance.
- Everyone is focused on serving the customer, and we do that by collaborating and supporting each other.
- Associates look forward to coming to work each day.
- Every associate matters and makes a difference.

It is truly a culture like no other. We hope you will join our team! Find out more about our company and culture here.

TRIMEDX is an industry-leading, independent clinical asset management company delivering comprehensive clinical engineering services, clinical asset informatics, and medical device cybersecurity solutions to many of the largest health systems in the US. We help healthcare providers transform their clinical assets into strategic tools, driving reductions in operational expenses, optimizing clinical asset capital purchasing decisions and usage, improving caregiver satisfaction and productivity, maximizing resources for patient care, and delivering improved patient safety and protection.

Health systems in the US spend, on average, 30% of their annual capital budget on clinical assets, representing more than $200 billion of annual US sales, forecast to grow at a CAGR of more than 5% over the next decade. Industry data solutions to help providers rationalize their clinical asset utilization and purchasing decisions are limited, forcing providers to rely heavily on equipment manufacturers for advice and insight. TRIMEDX was built by providers, for providers, and leverages a history of expert clinical engineering with data on 90-95% of in-use medical equipment in the United States and an industry-leading data set of more than 6 million medical device records. A recent study by Fortune estimates that the global healthcare asset management market will reach $215B by 2032, with a CAGR of 25.3%. The United States is the largest single market for medical devices and accounts for about 40 percent of worldwide sales (BMI Research 2015).

We are looking for a Chief Data Scientist who can accelerate our growth as a thought leader in data science, research, and the application of models to solve complex business problems. This person will lead the Company's efforts to create a data science practice dedicated to harnessing this proprietary data set to support commercialization of novel new market solutions that enable providers to make informed decisions regarding their clinical asset investments and utilization. This role will be one of the most prominent voices in the global medical device industry.

As the Chief Data Scientist, you will be responsible for leading TRIMEDX data and AI architecture and design to help our customers increase their clinical asset utilization and reduce cost to operate. You will design our enterprise-wide AI and machine learning initiatives. This new role will be instrumental in shaping and implementing our AI roadmap, driving innovation through advanced data modeling, and applying automation to optimize operations across diverse data ecosystems. It involves orchestrating agentic AI across multiple SaaS infrastructures, including, but not limited to, Snowflake, Azure, ServiceNow, and Looker. In this role, you will work with senior executives and customers to arrive at solutions that significantly lift their business and accelerate growth.

Key Responsibilities
- You are a strategic leader who has defined enterprise AI/ML platforms with a strong focus on LLMs, generative AI, and predictive modeling.
- You will implement model-driven features from initial concept to production, covering all stages including model creation, evaluation, performance metrics, A/B testing, drift monitoring, and self-correction with feedback loops.
- You will coach and train talented engineers in their career growth and set an example for the data and AI organization.
Your experience is building pricing models, forecasting, smart work assignment are preferred. You stay at the forefront of AI/ML research and emerging technologies; evaluate and integrate cutting-edge tools and frameworks. You will foster a culture of experimentation and continuous learning within the data science team. You will create high levels of engagement across teams in partnership with other key leaders within broader teams. Skills And Experience Minimum of 12 years of experience in computer science, data science, Statistics or related field. Experience preference in machine learning, data analytics or related disciplines with a focus on algorithmic product development. Deep expertise in statistics, econometrics, predictive analytics, and related disciplines. Working experience with modeling tools and languages such as Python, PyTorch, JAX, Tensorflow, SQL, and experience deploying production models on clouds platforms (AWS, Azure, GCP). Must have demonstrated experience leading data driven initiatives. Experience leading people is preferred. Demonstrated experience with multi-modal LLMs. Experience with A/B testing and cocreate MLOps practice for systemic application and maintenance of the models. Hands on experience deploying high impact, high Throughput, highly scalable, multi-modal ML models in Azure. Experience in data analysis using different types of datasets with statistics and predictive modeling foundations, including PC and foundation models. Experience creating patents and publications and / or speaking at top conferences such as CVPR (Computer Vision and Pattern Recognition Conference), IEEE (The Institute of Electrical and Electronics Engineers), SIGKDD (Special Interest Group on Knowledge Discovery and Data Mining), AAAI (Association for the Advancement of Artificial Intelligence), NeurIPS. Successful and proven experience to collaborate and deliver results in a fast paced, multifaceted, matrix environment. 
Proven success at working with abstract ideas and solving complex problems while driving collaboration across various teams. Ability to be hands-on when necessary as well as strategic is required. Excellent public speaking and presentation skills; strong written and verbal communication skills. Ability to travel up to 50%.

Education And Qualifications

Bachelor’s degree in Statistics, Economics, or a related field is required, or equivalent experience. Master’s degree or Ph.D. in Statistics or Economics is highly preferred. At TRIMEDX, we support and protect a culture where diversity, equity and inclusion are the foundation. We know it is our uniqueness and experiences that make a difference, drive innovation and create shared success. We create an inclusive workplace by actively seeking diversity, creating inclusion and driving equity and engagement. We embrace people’s differences, which include age, race, color, ethnicity, gender, gender identity, sexual orientation, national origin, education, genetics, veteran status, disability, religion, beliefs, opinions and life experiences. Visit our website to view our full Diversity, Equity and Inclusion statement, along with our social channels to see what our team is up to: Facebook, LinkedIn, Twitter. TRIMEDX is an Equal Opportunity Employer. Drug-Free Workplace. Because we are committed to providing a safe and productive work environment, TRIMEDX is a drug-free workplace. Accordingly, Associates are prohibited from engaging in the unlawful manufacture, sale, distribution, dispensation, possession, or use of any controlled substance or marijuana, or otherwise being under the influence thereof, on all TRIMEDX and Customer property or during working/on-call hours.

Posted 4 days ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description: Machine Learning Engineer - US Healthcare Claims Management

Position: MLOps Engineer - US Healthcare Claims Management
Location: Gurgaon, Delhi NCR, India
Company: Neolytix

About the Role: We are seeking an experienced MLOps Engineer to build, deploy, and maintain AI/ML systems for our healthcare Revenue Cycle Management (RCM) platform. This role will focus on operationalizing machine learning models that analyze claims, prioritize denials, and optimize revenue recovery through automated resolution pathways.

Key Tech Stack:

Models & ML Components: Fine-tuned healthcare LLMs (GPT-4, Claude) for complex claim analysis; knowledge of supervised/unsupervised models, optimization, and simulation techniques; domain-specific SLMs for denial code classification and prediction; vector embedding models for similar claim identification; NER models for extracting critical claim information; seq2seq models for automated appeal letter generation.

Languages & Frameworks: Strong proficiency in Python with OOP principles; experience developing APIs using Flask or FastAPI; integration knowledge with front-end applications; expertise in version control systems (e.g., GitHub, GitLab, Azure DevOps); proficiency in databases, including SQL, NoSQL, and vector databases; experience with Azure. Libraries: PyTorch/TensorFlow/Hugging Face Transformers.

Key Responsibilities:

ML Pipeline Architecture: Design and implement end-to-end ML pipelines for claims processing, incorporating automated training, testing, and deployment workflows.
Model Deployment & Scaling: Deploy and orchestrate LLMs and SLMs in production using containerization (Docker/Kubernetes) and Azure cloud services.
Monitoring & Observability: Implement comprehensive monitoring systems to track model performance, drift detection, and operational health metrics.
CI/CD for ML Systems: Establish CI/CD pipelines specifically for ML model training, validation, and deployment.
Data Pipeline Engineering: Create robust data preprocessing pipelines for healthcare claims data, ensuring compliance with HIPAA standards.
Model Optimization: Tune and optimize models for both performance and cost-efficiency in production environments.
Infrastructure as Code: Implement IaC practices for reproducible ML environments and deployments. Document technical solutions and create best practices for scalable AI-driven claims management.

What Sets You Apart: Experience operationalizing LLMs for domain-specific enterprise applications; background in healthcare technology or revenue cycle operations; track record of improving model performance metrics in production systems.

What We Offer: Competitive salary and benefits package; opportunity to contribute to innovative AI solutions in the healthcare industry; dynamic and collaborative work environment; opportunities for continuous learning and professional growth.

To Apply: Submit your resume and a cover letter detailing your relevant experience and interest in the role to vidya@neolytix.com
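The "vector embedding models for similar claim identification" item above boils down to nearest-neighbour search over embeddings. A minimal cosine-similarity sketch follows; the toy 4-dimensional vectors stand in for real embedding-model output, so treat the numbers as illustrative only.

```python
import numpy as np

def cosine_top_k(query_vec, claim_matrix, k=3):
    """Indices of the k stored claim embeddings most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = claim_matrix / np.linalg.norm(claim_matrix, axis=1, keepdims=True)
    return np.argsort(-(m @ q))[:k]  # highest cosine similarity first

claims = np.array([[1.0, 0.00, 0.0, 0.0],   # toy embedding of claim A
                   [0.9, 0.10, 0.0, 0.0],   # near-duplicate of claim A
                   [0.0, 1.00, 0.0, 0.0]])  # unrelated claim
query = np.array([1.0, 0.05, 0.0, 0.0])
print(cosine_top_k(query, claims, k=2).tolist())  # → [0, 1]
```

At production scale this brute-force matrix product is usually replaced by an approximate index (e.g. in a vector database), but the ranking logic is the same.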

Posted 4 days ago


5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Overview Cvent’s Global Demand Center seeks a dynamic and experienced Assistant Team Lead for our Marketing Technology team. This role is pivotal in optimizing our “Go-to-Market” technology, account-based marketing, and personalization efforts. The successful candidate will specialize in advanced marketing technologies, ensuring alignment with business goals and enhancing the experience for prospects and customers through innovative solutions. At Cvent, you'll be part of a dynamic team that values innovation and creativity. You'll work with cutting-edge technology and help drive our go-to-market efforts to new heights. We want to hear from you if you are passionate about marketing technology and have a track record of driving success through innovative solutions. In This Role, You Will Manage our Go-to-Market Tech Stack: Elevate our “Go To Market” technology stack, including revenue marketing tech, ABM, and personalization tools. Implement and manage advanced marketing technologies such as 6sense and chat solutions. Own the technical implementation and ongoing management of new Go-To-Market tools. Integration and Implementation: Lead the charge in overseeing technical integrations across various marketing and sales platforms. Transform the chat experience for prospects and customers by ensuring seamless integration of chat solutions with other marketing tools. Optimize sales-facing systems like Reachdesk to align with business goals. Campaign Attribution and Reporting: Support and enhance campaign attribution strategies for better tracking and analysis. Develop and manage comprehensive reporting frameworks to measure the effectiveness of technology-driven marketing efforts. Create and maintain ABM dashboards, providing clear visibility into performance metrics. Performance Analysis and Improvement: Analyze chatbot performance and make data-driven improvements to enhance customer engagement.
Lead efforts to improve the functionality and effectiveness of our marketing and sales enablement technologies. Leverage data-driven insights to inform decision-making and drive continuous improvement. Training and Support: Deliver impactful training on go-to-market tools and processes, ensuring the marketing team fully utilizes the capabilities of our tools. Support campaign attribution and reporting strategies, providing accurate and actionable data to stakeholders for informed decisions. Technical Expertise and Leadership: Serve as a technical expert, onboarding new technologies and optimizing the use of existing tools in our marketing technology stack. Guide the team in harnessing the full potential of our tech resources. Gap Identification and Requirement Development: Identify gaps and develop requirements for the automation of manual tasks to enhance marketing efficiency and effectiveness. Innovate solutions to streamline processes and drive productivity. Evaluation of New Technologies: Evaluate new marketing technologies, ensuring alignment with business objectives and staying ahead of industry trends. Here's What You Need Bachelor’s/Master’s degree in Marketing, Business, or a related field. Exceptional project management skills, including attention to detail, stakeholder engagement, project plan development, and deadline management with diverse teams. Deep experience with go-to-market tools like:
ABM - 6sense, DemandBase
Chat - Drift, Qualified, Avaamo
Gifting - Reachdesk, Sendoso
AI - ChatGPT, Microsoft Azure, Claude, Google Gemini, Glean, etc.
Web - CHEQ, OneTrust
iPaaS - Zapier, Tray.io, Informatica
Skilled in crafting technical documentation and simplifying complex procedures. A minimum of 5 years of hands-on technical experience with marketing technologies like marketing automation platforms, CRM, and database platforms. Strong capacity for understanding and fulfilling project requirements and expectations.
Excellent communication and collaboration skills, with a strong command of the English language. Self-motivated, analytical, eager to learn, and able to thrive in a team environment.

Posted 4 days ago


12.0 - 18.0 years

0 Lacs

Tamil Nadu, India

Remote


Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, it would be desirable if the candidate has experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments. Responsibilities AI-Powered Software Development & API Integration Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making. Data Engineering & AI Model Performance Optimization Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max tokens settings). 
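Of the inference-tuning knobs listed above, temperature and top-k sampling are the easiest to show concretely. A minimal sketch follows; the logit values are invented, and real serving stacks apply this per decoding step over the full vocabulary.

```python
import numpy as np

def sample_top_k(logits, k=5, temperature=0.7, rng=None):
    """Temperature-scaled top-k sampling over a single logit vector."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(-scaled)[:k]                 # keep the k highest-scoring tokens
    probs = np.exp(scaled[top] - scaled[top].max())  # softmax over survivors only
    probs /= probs.sum()
    return int(top[rng.choice(len(top), p=probs)])

logits = [2.0, 0.5, 3.0, -1.0, 0.1]
token = sample_top_k(logits, k=2, temperature=0.7, rng=np.random.default_rng(0))
print(token)  # one of the two highest-logit token ids, i.e. 0 or 2
```

Lower temperature sharpens the distribution (more deterministic output and often fewer wasted tokens), while smaller k bounds the candidate set; both trade diversity for latency and predictability.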
Microservices, APIs & Security Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices. AI & Data Engineering Collaboration Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation. Cross-Functional Coordination and Communication Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency. Mentorship & Knowledge Sharing Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions. Education & Experience Required 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions. Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles. Bachelor’s Degree or equivalent in Computer Science, Engineering, Data Science, or a related field. 
Proficiency in full-stack development with expertise in Python (preferred for AI) and Java. Experience with structured and unstructured data:
SQL (PostgreSQL, MySQL, SQL Server)
NoSQL (OpenSearch, Redis, Elasticsearch)
Vector Databases (FAISS, Pinecone, ChromaDB)
Cloud & AI Infrastructure:
AWS: Lambda, SageMaker, ECS, S3
Azure: Azure OpenAI, ML Studio
GenAI Frameworks & Tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI.
Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization. Proficiency in AI model evaluation (BLEU, ROUGE, BERTScore, cosine similarity) and responsible AI deployment. Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams. Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption.

About Athenahealth

Here’s our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. What’s unique about our locations? From a historic, 19th-century arsenal to a converted, landmark power plant, all of athenahealth’s offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together. Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That’s another story. athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal.
We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. 
With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.

Posted 4 days ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Keywords / Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, model monitoring, model retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools’ benefits and usage.

Required Experience And Qualifications: Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience in CI/CD/CT pipeline implementation. Experience with cloud platforms - preferably AWS - would be an advantage.
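The data-drift detection called out above often starts from a two-sample test comparing training-time and production feature values. A stdlib-only Kolmogorov-Smirnov sketch follows; the sample data and the 0.1 alert threshold are illustrative assumptions, not part of the role's actual stack.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: the largest gap between the empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    gap = 0.0
    for x in a + b:  # the CDF gap can only peak at an observed point
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        gap = max(gap, abs(fa - fb))
    return gap

baseline = [i / 100 for i in range(100)]    # training-time feature values
live = [i / 100 + 0.2 for i in range(100)]  # shifted production values
stat = ks_statistic(baseline, live)
print(f"KS statistic: {stat:.2f}, drift flagged: {stat > 0.1}")
```

In a monitoring pipeline this check would run on a schedule per feature, with breaches routed to the retraining pipeline.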

Posted 4 days ago


2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Must-Have Skills & Traits Core Engineering Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning. Experience working with databases (SQL/NoSQL), caching layers, and background job queues. AI/ML & GenAI Expertise Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search Comfortable working with unstructured data (text, images) in real-world product environments Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate Ownership & Execution Demonstrated ability to take full ownership of features or modules from architecture to delivery Able to work independently in ambiguous situations and drive solutions with minimal guidance Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance Nice-to-Have Skills Experience in startup or fast-paced product environments. 2-5 years of relevant experience. Familiarity with asynchronous programming patterns in Python. 
Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge
Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation
Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools
Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines
Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
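The prompt-engineering and retrieval-augmented generation integration described above reduces to "retrieve relevant context, then splice it into the prompt". A deliberately naive sketch follows, using term overlap in place of a real vector search; the documents and question are invented.

```python
def build_rag_prompt(question, documents, k=2):
    """Rank documents by term overlap with the question, then splice the
    top-k in as grounding context for the downstream LLM call."""
    q_terms = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is located in Pune.",
]
print(build_rag_prompt("How long do refunds take?", docs, k=1))
```

Swapping the overlap score for embedding similarity against FAISS, Pinecone, or Weaviate (all named above) changes only the ranking step; the prompt assembly stays the same.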

Posted 4 days ago


3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


About Beco Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper—sparking our commitment to “Be Eco” every day. Our mission: “To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time.” Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India’s top marketplaces, retail chains and D2C channels. Why we’re hiring Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco—owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales. Responsibilities Partner with functional leaders to translate business pain-points into AI/ML solutions and automation opportunities. Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring. Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks. Stand up CI/CD + MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases. Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy). Mentor analysts & engineers; evangelise a culture of experimentation and “fail-fast” learning—core to Beco’s GSD (“Get Sh#!t Done”) values.
Must-have Qualifications 3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production. Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering. Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure). Familiarity with modern Gen-AI & NLP stacks (OpenAI, Hugging Face, RAG, vector databases). Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment. Nice-to-haves Exposure to e-commerce or FMCG supply-chain data. Knowledge of finance workflows (Reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce). Experience with vision models (Detectron2, YOLO) and edge deployment. Contributions to open-source ML projects or published papers/blogs. What Success Looks Like After 1 Year 70% reduction in manual reporting hours across finance and ops. Forecast accuracy >85% at SKU level, slashing stock-outs by 30%. AI chatbot resolves 60% of tickets end-to-end, with CSAT >4.7/5. At least two new data-products launched that directly boost topline or margin. Life at Beco Purpose-driven team obsessed with measurable climate impact. An entrepreneurial, accountable, bold culture—where winning minds precede outside victories.
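The "forecast accuracy >85% at SKU level" target above is typically computed as one minus the mean absolute percentage error (MAPE). A minimal sketch with invented demand numbers:

```python
def forecast_accuracy(actuals, forecasts):
    """1 - MAPE over SKU-level demand, skipping zero-demand SKUs
    (percentage error is undefined when actual demand is zero)."""
    ape = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return 1 - sum(ape) / len(ape)

actual_units = [100, 200, 50]    # units sold per SKU
forecast_units = [90, 210, 55]   # model's predictions
acc = forecast_accuracy(actual_units, forecast_units)
print(f"forecast accuracy: {acc:.1%}")  # → 91.7%
```

Note the zero-demand exclusion is a common but consequential choice; weighted variants (WMAPE) are often preferred when SKU volumes vary widely.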

Posted 4 days ago


2.5 - 5.0 years

5 - 11 Lacs

India

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments. Key Responsibilities AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives. Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks. AI Model Deployment and Frameworks: Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.
Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates). Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2.5-5 years
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
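Before latency metrics of the kind described above are exported to a system like Prometheus, something has to maintain a rolling window and check it against an SLO. A stdlib-only sketch follows; the window size and the 250 ms SLO threshold are illustrative assumptions.

```python
from collections import deque

class LatencyMonitor:
    """Keep the most recent request latencies and flag p95 SLO breaches."""
    def __init__(self, window=1000, slo_ms=250.0):
        self.samples = deque(maxlen=window)  # old samples age out automatically
        self.slo_ms = slo_ms

    def observe(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank percentile

    def breached(self):
        return self.p95() > self.slo_ms

mon = LatencyMonitor(slo_ms=250)
for ms in [120] * 90 + [400] * 10:   # a burst of slow inferences
    mon.observe(ms)
print(mon.p95(), mon.breached())  # → 400 True
```

In practice a Prometheus client histogram plays this role and an alerting rule encodes the threshold, but the percentile-vs-SLO logic is the same.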

Posted 4 days ago


12.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Location: Ahmedabad
Reported by: Team of 70 - Software & Design, Product team and R&D (4)
Reporting to: Promoter / Director

JOB DETAILS:
Domain & Experience Requirements: 12+ years engineering; 5+ years senior technology leadership (CTO, VP Engineering, CIO). Demonstrated success in healthcare / wellness / fitness SaaS or medical-device integration.

Role & Responsibilities:
Role Summary: Own the end-to-end technology and information stack for our cloud-native HMIS / Tele-health SaaS platform. Provide visionary architectural leadership (system, application, cloud, and data), harden security & compliance. Create a hands-on, jovial, ownership-driven engineering culture, and deliver new AI-powered products at startup velocity—while scaling to the level of a nationwide public-health utility.

Key Responsibilities
Architecture Leadership – Define & evolve System, Application, Cloud and Database architectures for a multi-tenant, high-availability SaaS platform; maintain direct involvement in the product roadmap.
Cloud & Infrastructure – Own cloud cost, performance and DR across two GCP regions; manage internal IT (laptops, SASE, MDM) with zero-trust controls.
AI / ML Enablement – Embed AI/ML capabilities (ASR + RAG summarisation, anomaly detection) into HMIS modules; evaluate and productionise new models.
Security, Risk & Compliance – Lead ISO 27001, SOC 2, HIPAA, NABH compliance; enforce DevSecOps, threat-modelling, pen-testing and vulnerability management.
Product Documentation & SDLC – Set up and enforce a seamless SDLC; establish and audit SOPs; ensure every service, run-book and recovery plan is documented.
People & Culture – Foster a culture of innovation, collaboration and continuous improvement; keep teams motivated and jovial; coach by example on the dev floor.
Pre-Sales & Client Engagement – Partner with the sales team to design solution demos, proofs-of-concept and technical bid responses; engage directly with C-suite stakeholders at prospective hospitals to gather requirements and position the platform’s value proposition.
Stakeholder & Vendor Management – Translate clinical requirements into epics; present tech OKRs to the Board; manage cloud, AI and medical-device vendors.

Technical Expertise & Expectations
Holistic Architecture Leadership – Shape system, application, cloud, and data architectures that stay performant, maintainable, and cost-efficient as the platform scales.
SaaS Platform Stewardship – Guide the evolution of a multi-tenant, always-on health-technology product, balancing feature delivery with platform reliability.
Hands-On Engineering – Stay close to the code: review pull requests, troubleshoot production issues, and mentor engineers through practical example.
Product Partnership – Convert business and clinical requirements into clear technical roadmaps and measurable engineering objectives.
AI / ML Awareness – Identify pragmatic opportunities to embed data-driven and AI capabilities that enhance clinical workflows and user experience.
Process & SDLC Ownership – Establish robust DevSecOps, CI/CD, infrastructure-as-code, and documentation practices that keep releases predictable and secure.
Security, Risk & Compliance Oversight – Maintain a proactive security posture, comprehensive SOPs, and continuous compliance with relevant healthcare and data-protection standards.
Health Interoperability Standards – Knowledge of FHIR, DICOM, SNOMED CT, HL7, and related standards is highly desirable.
Technology Foresight – Monitor emerging trends, assess their relevance, and pilot new tools or patterns that can strengthen the platform.
Embedded Systems & Hardware Insight – Knowledge of firmware, IoT, or medical-device hardware development is seen as a distinguishing factor.
Personal Qualities
Ownership mentality – treats uptime, cost and code quality as personal responsibilities.
Methodical planner – works to clear quarterly and sprint plans; avoids scope drift.
Visible, hands-on leader – is present on the dev floor; white-boards solutions; joins incident calls.
Jovial motivator – energises stand-ups, celebrates wins, runs hack-days.

Qualification: Education: Bachelor's or Master's degree in Computer Science, Engineering, or related field. MBA or advanced healthcare-related qualification is a plus.

Please send the updated resume along with confirmation of interest.
Regards, Pooja Raval - Sr. Consultant / TL
Send CV and reply to: unitedtechit@uhr.co.in; we will call you for a detailed discussion if your profile has relevant experience.

Posted 4 days ago

Apply

0.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site


Designation: Senior Analyst – Data Science | Level: L2 | Experience: 4 to 6 years | Location: Chennai

Job Description: We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments.

Responsibilities:
  • Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production
  • Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment
  • Develop CI/CD pipelines for machine learning models ensuring reliable and automated deployment processes
  • Monitor model performance in production and implement automated retraining and rollback mechanisms
  • Collaborate with data scientists to productionize models and optimize them for performance and scalability
  • Implement model versioning, experiment tracking, and metadata management systems
  • Build monitoring and alerting systems for model drift, data quality, and system performance
  • Manage containerization and orchestration of ML workloads using Docker and Kubernetes
  • Optimize model serving infrastructure for low-latency predictions and high throughput
  • Ensure compliance with financial regulations and implement proper model governance frameworks

Skills:
  • 4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services
  • Strong expertise in deploying and scaling machine learning models in production environments
  • Extensive experience with Metaflow for ML pipeline orchestration and workflow management
  • Advanced proficiency with Git and version control systems, including branching strategies and collaborative workflows
  • Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes)
  • Strong programming skills in Python with experience in ML libraries (pandas, numpy, scikit-learn)
  • Experience with CI/CD tools and practices for ML workflows
  • Knowledge of distributed computing and cloud-based ML infrastructure
  • Understanding of model monitoring, A/B testing, and feature store management

Additional Skillsets:
  • Experience with Hex or similar data analytics platforms
  • Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems
  • Experience with real-time model serving and streaming data processing
  • Familiarity with MLflow, Kubeflow, or other ML lifecycle management tools
  • Understanding of financial regulations and model governance requirements

Job Snapshot: Updated Date 13-06-2025 | Job ID J_3745 | Location Chennai, Tamil Nadu, India | Experience 4 - 6 Years | Employee Type Permanent
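The monitoring responsibilities above centre on detecting model and data drift in production. As an illustrative sketch only (not part of the posting's stack), a minimal Population Stability Index check in plain Python could look like this; the function name, bin count, and the commonly cited 0.25 alert threshold are assumptions made for the example:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. training
    data) and a production sample. Bins are derived from the reference range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range
    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1  # clamp out-of-range values
        # floor each fraction so log() never sees zero
        return [max(c / len(sample), 1e-4) for c in counts]
    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score ~0; a shifted one scores well above
# the conventional 0.25 "significant drift" threshold.
reference = list(range(100))
print(psi(reference, reference))                    # ~0.0
print(psi(reference, [x + 50 for x in reference]))  # large value (drift)
```

A scheduled job could compute this per feature and trigger the posting's "automated retraining and rollback mechanisms" when the score crosses the chosen threshold.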

Posted 4 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Role Overview
As a Test Automation Lead at Dailoqa, you'll architect and implement robust testing frameworks for both software and AI/ML systems. You'll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance.

Key Responsibilities
Test Automation Strategy & Framework Design
  • Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions.
  • Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy.
  • Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization).
Continuous Testing & CI/CD Integration
  • Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD.
  • Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing).
AI/ML Model Validation
  • Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow.
  • Validate model drift and retraining pipelines to ensure consistent performance in production.
Quality Metrics & Reporting
  • Define and track KPIs: test coverage (code, data, scenarios), defect leakage rate, automation ROI (time saved vs. maintenance effort), and model accuracy thresholds.
  • Report risks and quality trends to stakeholders in sprint reviews.
  • Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation).
Soft Skills
  • Strong problem-solving skills for balancing speed and quality in fast-paced AI development.
  • Ability to communicate technical risks to non-technical stakeholders.
  • Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).

Requirements (Must-Have)
  • 5–8 years in test automation, with 2+ years validating AI/ML systems.
  • Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter.
  • CI/CD: Jenkins, GitHub Actions, GitLab.
  • AI/ML testing: model validation, drift detection, GenAI output evaluation.
  • Languages: Python, Java, or JavaScript.
  • Certifications: ISTQB Advanced, CAST, or equivalent.
  • Experience with MLOps tools: MLflow, Kubeflow, TFX.
  • Familiarity with vector databases (Pinecone, Milvus) and RAG workflows.
  • Strong programming/scripting experience in JavaScript, Python, Java, or similar.
  • Experience with API testing, UI testing, and automated pipelines.
  • Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation.
  • Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias.
  • Familiarity with MLOps pipelines and automated validation of model performance in production.
  • Exposure to Agile/Scrum methodology and tools like Azure Boards.
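Because LLM output is non-deterministic, the GenAI test suites this role describes typically assert properties of a response rather than exact strings. A hedged sketch of that idea in plain Python; the function name, labels, and thresholds are invented for illustration:

```python
def validate_llm_output(text, required_terms, max_chars=500, banned_terms=()):
    """Property-based checks for a non-deterministic LLM response: assert
    invariants (length bound, required and forbidden content) instead of an
    exact match. Returns a list of failure labels; an empty list means pass."""
    lower = text.lower()
    failures = []
    if len(text) > max_chars:
        failures.append("too_long")
    for term in required_terms:
        if term.lower() not in lower:
            failures.append(f"missing:{term}")
    for term in banned_terms:
        if term.lower() in lower:
            failures.append(f"banned:{term}")
    return failures

# Example: a support-bot answer must mention the refund and must not
# promise outcomes the business cannot stand behind.
print(validate_llm_output("Your refund has been processed.", ["refund"],
                          banned_terms=["guarantee"]))  # []
```

In a real suite these checks would sit alongside semantic-similarity or LLM-as-judge scoring, since keyword presence alone is a weak proxy for answer quality.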

Posted 5 days ago

Apply

3.0 years

0 Lacs

India

On-site


About Us: Waltcorp is at the forefront of cloud engineering, helping businesses transform their operations by leveraging the power of Google Cloud Platform (GCP). We are seeking a skilled and visionary GCP DevOps Solutions Architect – ML/AI Focus to design and implement cloud solutions that address our clients' complex business challenges.

Key Responsibilities:
  • Solution Design: Collaborate with stakeholders to understand business requirements and design scalable, secure, and high-performing GCP cloud architectures.
  • Technical Leadership: Serve as a technical advisor, guiding teams on GCP best practices, services, and tools to optimize performance, security, and cost efficiency.
  • Infrastructure Development: Architect and oversee the deployment of cloud solutions using GCP services such as Compute Engine, Cloud Storage, Cloud Functions, Cloud SQL, and more.
  • Infrastructure as Code (IaC) & Cloud Automation: Design, implement, and manage infrastructure using Terraform, Google Cloud Deployment Manager, or Pulumi. Automate provisioning of compute, storage, and networking resources using GCP services like Compute Engine, Cloud Storage, VPC, IAM, GKE (Google Kubernetes Engine), and Cloud Run. Implement and maintain CI/CD pipelines (using Cloud Build, Jenkins, GitHub Actions, or GitLab CI).
  • ML Model Deployment & Automation (MLOps): Build and optimize end-to-end ML pipelines using Vertex AI Pipelines, Kubeflow, or MLflow. Automate training, testing, validation, and deployment of ML models in staging and production environments. Support model versioning, reproducibility, and lineage tracking using tools like DVC, Vertex AI Model Registry, or MLflow.
  • Monitoring & Logging: Implement monitoring for both infrastructure and ML workflows using Cloud Monitoring, Prometheus, Grafana, and Vertex AI Model Monitoring. Set up alerting for anomalies in ML model performance (data drift, concept drift). Ensure application logs, model outputs, and system metrics are centralized and accessible.
  • Containerization & Orchestration: Containerize ML workloads using Docker and orchestrate using GKE or Cloud Run. Optimize resource usage through autoscaling and right-sizing of ML workloads in containers.
  • Data & Experiment Management: Integrate with data versioning tools (e.g., DVC or LakeFS) to track datasets used in model training. Enable experiment tracking using MLflow, Weights & Biases, or Vertex AI Experiments. Support reproducible research and automated experimentation pipelines.
  • Client Engagement: Communicate complex technical solutions to non-technical stakeholders and deliver high-level architectural designs, presentations, and proposals.
  • Integration and Migration: Plan and execute cloud migration strategies, integrating existing on-premises systems with GCP infrastructure.
  • Security and Compliance: Implement robust security measures, including IAM policies, encryption, and monitoring, to ensure compliance with industry standards and regulations.
  • Documentation: Develop and maintain detailed technical documentation for architecture designs, deployment processes, and configurations.
  • Continuous Improvement: Stay current with GCP advancements and emerging trends, recommending updates to architecture strategies and tools.

Qualifications:
  • Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
  • Experience: 3+ years of experience in cloud architecture, with a focus on GCP.
  • Technical Expertise: Strong knowledge of GCP core services, including compute, storage, networking, and database solutions. Proficiency in Infrastructure as Code (IaC) tools like Terraform, Deployment Manager, or Pulumi. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, GKE, or Cloud Run). Understanding of DevOps practices, CI/CD pipelines, and automation. Strong command of networking concepts such as VPCs, load balancing, and firewall rules. Familiarity with scripting languages like Python or Bash.

Preferred Qualifications:
  • Google Cloud Certified – Professional Cloud Architect or Professional DevOps Engineer.
  • Expertise in engineering and maintaining MLOps and AI applications.
  • Experience in hybrid cloud or multi-cloud environments.
  • Familiarity with monitoring and logging tools such as Cloud Monitoring, ELK Stack, or Datadog.

[CLOUD-GCDEPS-J25]
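The "alerting for anomalies in ML model performance" responsibility is often realised as a rolling-window accuracy monitor behind the alerting system. A minimal sketch in plain Python rather than any specific GCP or Vertex AI API; the class name, window size, and tolerance are assumptions made for the example:

```python
from collections import deque

class DriftAlarm:
    """Flags possible concept drift when rolling accuracy over the last
    `window` labelled predictions falls more than `tolerance` below the
    baseline accuracy measured at deployment time."""
    def __init__(self, baseline, window=100, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def observe(self, correct):
        """Record one labelled prediction; return True if drift is flagged."""
        self.recent.append(1 if correct else 0)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.baseline - self.tolerance

# A model deployed at 90% accuracy, monitored over a 10-prediction window:
alarm = DriftAlarm(baseline=0.90, window=10, tolerance=0.05)
for _ in range(10):
    alarm.observe(True)       # healthy traffic, no alarm
alarm.observe(False)          # rolling accuracy 0.9, still within tolerance
print(alarm.observe(False))   # rolling accuracy 0.8 -> True (drift flagged)
```

In production the flag would feed an alerting channel (e.g. Cloud Monitoring) and, where labels arrive late, the same pattern applies to proxy metrics such as prediction-distribution shift.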

Posted 5 days ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


This role is for one of Weekday's clients.
Salary range: Rs 450000 - Rs 550000 (i.e. INR 4.5-5.5 LPA)
Min Experience: 1 year
Job Type: Full-time

We are looking for a research-oriented Environmental Data Scientist to spearhead the development of advanced algorithms that improve the accuracy, reliability, and overall performance of air quality sensor data. This role is focused on addressing real-world challenges in environmental sensing, such as sensor drift, cross-interference, and anomalous data behavior, going beyond conventional data science applications.

Key Responsibilities:
  • Develop and implement algorithms for sensor calibration, signal correction, anomaly detection, and cross-interference mitigation to improve air quality data accuracy and stability.
  • Conduct research on sensor behavior and environmental impacts to guide algorithm design.
  • Collaborate with software and embedded systems teams to integrate algorithms into cloud and edge environments.
  • Analyze large-scale environmental datasets using Python, R, or similar data analysis tools.
  • Validate and refine algorithms using both laboratory and field data through iterative testing.
  • Create visualization tools and dashboards to interpret sensor behavior and assess algorithm effectiveness.
  • Support environmental research initiatives with data-driven statistical analysis.
  • Document methodologies, test results, and findings for internal knowledge sharing and system improvement.
  • Contribute to team efforts by writing clean, efficient code and assisting in overcoming programming challenges.

Required Skills & Qualifications:
  • Bachelor's or Master's degree in Environmental Engineering, Environmental Science, Chemical Engineering, Electronics/Instrumentation Engineering, Computer Science, Data Science, Physics, or Atmospheric Science with a focus on data/sensing.
  • 1-2 years of experience working with sensor data or IoT-based environmental monitoring systems.
  • Strong background in algorithm development, signal processing, and statistical data analysis.
  • Proficiency in Python (e.g., pandas, NumPy, scikit-learn) or R, with experience managing real-world sensor datasets.
  • Ability to design and deploy models in cloud-based or embedded environments.
  • Strong analytical thinking, problem-solving skills, and effective communication abilities.
  • Genuine interest in environmental sustainability and clean technologies.

Preferred Qualifications:
  • Familiarity with time-series anomaly detection, signal noise reduction, sensor fusion, or geospatial data processing.
  • Exposure to air quality sensor technology, environmental sensor datasets, or dispersion modeling.
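The sensor-calibration responsibility described above is often bootstrapped with a simple linear correction fitted by least squares against a co-located reference instrument, before moving to richer models that handle drift and cross-interference. A minimal plain-Python sketch; the function names and toy data are illustrative, not from the posting:

```python
from statistics import mean

def fit_linear_calibration(raw, reference):
    """Ordinary least squares fit y = a*x + b mapping raw low-cost-sensor
    readings onto co-located reference-instrument readings."""
    mx, my = mean(raw), mean(reference)
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def apply_calibration(raw, a, b):
    """Correct a batch of raw readings with the fitted coefficients."""
    return [a * x + b for x in raw]

# Toy example where the true relationship is reference = 2*raw + 1:
a, b = fit_linear_calibration([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)                           # 2.0 1.0
print(apply_calibration([10], a, b))  # [21.0]
```

Real deployments usually refit periodically, since the slope and offset themselves drift with sensor ageing, temperature, and humidity.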

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

Remote


Who we are We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world. Job summary As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate. What you will do Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP. Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery. Develop and maintain infrastructure as code (IaC) — with a primary focus on Terraform — including building reusable, modular, and parameterized modules for scalable infrastructure. Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution. 
Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating. Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks. Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers. Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments. Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers. Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration. Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls. Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning. Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents. What you will have Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience. 6+ years in DevOps or similar role, with strong experience in infrastructure architecture and automation. Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD. Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel. Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov. Strong background in cloud networking, VPC design, DNS, and ingress/egress control. Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate. 
Hands-on experience with Auth0 or equivalent identity management platforms. Proficient in container technologies like Docker, with production deployments via ECS/Fargate. Solid experience in vulnerability and compliance management across the infrastructure lifecycle. Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development. Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk. Excellent troubleshooting skills in cloud-native and distributed systems. Effective communicator and cross-functional collaborator in Agile/Scrum environments.

Benefits
  • Generous time off policies
  • Top Shelf Benefits
  • Education, wellness and lifestyle support
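The Terraform drift-detection practice this role calls for is commonly implemented as a scheduled job around `terraform plan -detailed-exitcode`, whose documented exit codes are 0 (no changes), 1 (error), and 2 (pending changes). A minimal wrapper sketch; the function names and the choice of a `-refresh-only` plan are assumptions made for illustration:

```python
import subprocess

def classify_plan_exit(code):
    """Map `terraform plan -detailed-exitcode` return codes to a status:
    0 -> in sync, 2 -> drift/pending changes, anything else -> error."""
    return {0: "in_sync", 2: "drift_detected"}.get(code, "error")

def check_drift(workdir):
    """Run a refresh-only plan against live infrastructure so the exit
    code reflects drift rather than un-applied code changes."""
    proc = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-refresh-only", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    status = classify_plan_exit(proc.returncode)
    if status == "error":
        raise RuntimeError(f"terraform plan failed:\n{proc.stderr}")
    return status

# A scheduler (a Spacelift drift-detection run, or a cron-triggered CI job)
# could call check_drift(...) per workspace and alert on "drift_detected".
print(classify_plan_exit(2))  # drift_detected
```

Keeping the exit-code interpretation in its own small function makes the policy unit-testable without ever shelling out to Terraform.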

Posted 5 days ago

Apply

Exploring Drift Jobs in India

The drift job market in India is rapidly growing, with an increasing demand for professionals skilled in this area. Drift professionals are sought after by companies looking to enhance their customer service and engagement through conversational marketing.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

Average Salary Range

The average salary range for drift professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 10 lakhs per annum.

Career Path

A typical career path in the drift domain may progress from roles such as Junior Drift Specialist or Drift Consultant to Senior Drift Specialist, Drift Manager, and eventually reaching the position of Drift Director or Head of Drift Operations.

Related Skills

In addition to expertise in drift, professionals in this field are often expected to have skills in customer service, marketing automation, chatbot development, and data analytics.

Interview Questions

  • What is conversational marketing? (basic)
  • How would you handle a customer complaint through a drift chatbot? (medium)
  • Can you explain a scenario where you successfully implemented drift for a client? (medium)
  • What are some common challenges faced in drift implementation and how do you overcome them? (advanced)
  • How do you measure the success of a drift campaign? (medium)
  • Explain the importance of personalization in drift marketing. (medium)
  • How do you ensure compliance with data privacy regulations when using drift? (advanced)
  • What strategies would you implement to increase customer engagement through drift? (medium)
  • Can you provide examples of drift integrations with other marketing tools? (advanced)
  • How do you stay updated on the latest trends and developments in drift technology? (basic)
  • Describe a situation where you had to troubleshoot a technical issue in a drift chatbot. (medium)
  • How do you handle leads generated through drift to ensure conversion? (medium)
  • What are some best practices for setting up drift playbooks? (medium)
  • How do you customize drift for different target audiences? (medium)
  • Explain the difference between drift and traditional marketing methods. (basic)
  • Can you give an example of a successful drift campaign you were involved in? (medium)
  • How do you ensure a seamless transition between drift and human agents in customer interactions? (medium)
  • What metrics do you track to measure the effectiveness of a drift chatbot? (medium)
  • How do you handle negative feedback received through drift interactions? (medium)
  • What are the key components of a successful drift strategy? (medium)
  • How do you handle a high volume of customer inquiries through drift? (medium)
  • Explain the role of AI in drift marketing. (medium)
  • How do you ensure that drift chatbots are providing accurate information to customers? (medium)
  • Describe a situation where you had to customize drift to meet specific client requirements. (advanced)

Closing Remark

As you prepare for a career in drift jobs in India, remember to showcase your expertise, experience, and passion for conversational marketing. Stay updated on industry trends and technologies to stand out in the competitive job market. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies