
666 Drift Jobs - Page 6

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a Quality Engineer on the Marketing Datahub Squad, you'll join a team of passionate professionals dedicated to building and supporting BCG's next-generation data analytics foundation. Your work will enable personalized customer journeys and empower data-driven decisions by ensuring our analytics platform is stable, scalable, and reliable. The incumbent will help improve and champion data quality and integrity throughout the data lake and other external systems. The candidate must be detail-oriented, open-minded, and interested in continuous learning, while being curious and unafraid to ask questions. They must be willing to innovate and initiate change, discover fresh solutions, and present innovative ideas while driving towards increased test automation. They must also work well in a global team environment and collaborate with peers and stakeholders.

Responsibilities:
• Champion data quality across our end-to-end pipeline: from various ingestion sources into Snowflake, through various transformations, to downstream analytics and reporting.
• Perform integration and regression testing to ensure all system components work together successfully.
• Design, execute, and automate test plans for various ETL solutions to ensure each batch and streaming job delivers accurate, timely data.
• Develop and monitor checks via dbt tests and other tools that surface schema drift, record-count mismatches, null anomalies, and other integrity issues.
• Track and manage defects in JIRA; work collaboratively with the Product Owner, Analysts, and Data Engineers to prioritize and resolve critical data bugs.
• Maintain test documentation, including test strategies, test cases, and runbooks, ensuring clarity for both technical and business stakeholders.
• Continuously improve our CI/CD pipelines (GitHub Actions) by integrating data quality gates and enhancing deployment reliability.

What You'll Bring
• Agile SDLC & testing life cycle: proven track record testing in agile environments with distributed teams.
• Broad testing expertise: hands-on experience in functional, system, integration, and regression testing, applied specifically to data/ETL pipelines.
• Data platform tools: practical experience with Snowflake, dbt, and Fivetran for building, transforming, and managing analytic datasets.
• Cloud technologies: familiarity with AWS services (Lambda, Glue jobs, and other AWS data stack components) and Azure, including provisioning test environments and validating cloud-native data processes.
• SQL mastery: ability to author and optimize complex queries to validate transformations, detect discrepancies, and generate automated checks.
• Pipeline validation: testing data lake flows (ingest/extract), backend API services for data push/pull, and any data access or visualization layers.
• Defect management: using JIRA for logging, triaging, and reporting on data defects, and Confluence for maintaining test docs and KPIs.
• Source control & CI/CD: hands-on with Git for branching and code reviews; experience integrating tests into Jenkins or GitHub Actions.
• Test planning & strategy: help define the scope, estimates, and development of test plans, test strategies, and test scripts through the iterations to ensure a quality product.
• Quality metrics & KPIs: tracking and presenting KPIs for testing efforts, such as test coverage, gaps, hotfixes, and defect leakage.
• Automation: experience writing end-to-end and/or functional integration automated tests using relevant test automation frameworks.

Additional Info
You're good at:
• Data-focused testing: crafting and running complex SQL-driven validations, cross-environment comparisons, and sample-based checks in complex pipelines.
• Automation mindset: identifying and implementing test automation solutions for regression, monitoring, and efficiency purposes.
• Collaboration: partnering effectively with Data Engineers, Analytics, BI, and Product teams to translate requirements into testable scenarios; being a team player who is open, positive in a group dynamic, able to work collaboratively in virtual teams, and a highly proactive self-starter.
• Agile delivery: adapting to fast-moving sprints; contributing to sprint planning, retrospectives, and backlog grooming.
• Proactivity: spotting gaps in coverage, proposing new test frameworks or tools, and driving adoption across the squad.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
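The integrity checks this role describes (record-count mismatches, null anomalies) can be sketched generically. The snippet below is a minimal illustration using an in-memory SQLite database as a stand-in for Snowflake; the table and column names are invented for the example.

```python
import sqlite3

# In-memory SQLite stands in for Snowflake; table/column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, amount REAL);
    CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
    INSERT INTO raw_orders VALUES (1, 10.0), (2, 20.0), (3, NULL);
    INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.0), (3, NULL);
""")

def count_mismatch(conn, source, target):
    """Row counts should agree after a 1:1 staging transformation."""
    src = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    return src - tgt

def null_anomalies(conn, table, column):
    """Number of NULLs in a column that is expected to be populated."""
    row = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()
    return row[0]

print(count_mismatch(conn, "raw_orders", "stg_orders"))  # 0 -> counts agree
print(null_anomalies(conn, "stg_orders", "amount"))      # 1 -> flag for triage
```

In a dbt project the same two checks would typically be declared as generic tests (e.g. `not_null`) in a schema YAML file rather than hand-written SQL.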

Posted 2 weeks ago


6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps/MLOps Expert
Location: Gurugram (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps, with at least 2 years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP, with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced): primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (an advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and leading technical teams
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: work with cutting-edge enterprise AI/ML technologies
• Leadership: opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: enable daily deployments with zero downtime
• Performance: ensure optimal response times and system performance
• Cost Optimization: achieve 20-30% annual infrastructure cost reduction
• Security: zero security incidents and full compliance adherence

Business Impact
• Time to Market: reduce deployment cycles and improve development velocity
• Client Satisfaction: maintain 95%+ enterprise client satisfaction scores
• Team Productivity: improve engineering team efficiency by 40%+
• Scalability: support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
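The drift monitoring this role calls for can be illustrated with a deliberately simplified sketch: compare a live feature sample against its training-time baseline using a z-test on the mean. Production systems would use dedicated tooling (e.g. the monitoring stacks listed above); all numbers and the threshold here are invented.

```python
import statistics

def mean_drift_zscore(baseline, live):
    """z-score of the live mean relative to the baseline distribution."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    n = len(live)
    return abs(statistics.fmean(live) - mu) / (sigma / n ** 0.5)

# Baseline: feature values observed at training time (illustrative).
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7]
stable = [10.0, 10.1, 9.9, 10.2]    # similar distribution
drifted = [13.0, 13.4, 12.8, 13.1]  # mean has shifted upward

THRESHOLD = 3.0  # flag drift when the live mean is > 3 standard errors away
print(mean_drift_zscore(baseline, stable) > THRESHOLD)   # False
print(mean_drift_zscore(baseline, drifted) > THRESHOLD)  # True
```

A monitor like this would run on a schedule against fresh inference data, and a `True` result would trigger the automated retraining pipeline mentioned above.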

Posted 2 weeks ago


6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert
Location: Mohali (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps, with at least 2 years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP, with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced): primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (an advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and leading technical teams
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: work with cutting-edge enterprise AI/ML technologies
• Leadership: opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: enable daily deployments with zero downtime
• Performance: ensure optimal response times and system performance
• Cost Optimization: achieve 20-30% annual infrastructure cost reduction
• Security: zero security incidents and full compliance adherence

Business Impact
• Time to Market: reduce deployment cycles and improve development velocity
• Client Satisfaction: maintain 95%+ enterprise client satisfaction scores
• Team Productivity: improve engineering team efficiency by 40%+
• Scalability: support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.

Posted 2 weeks ago


5.0 years

0 Lacs

India

On-site

Role Overview:
Join our mission to ensure AI agents are safe, reliable, and trustworthy for enterprise deployment. You'll focus on developing tests that evaluate agent behavior, identify edge cases, and ensure compliance with safety standards.

Key Responsibilities:
• Design and execute safety testing protocols for AI agents
• Develop adversarial testing strategies to identify agent vulnerabilities
• Create test cases for hallucination detection, bias evaluation, and toxic output prevention
• Build automated monitoring for agent drift and performance degradation
• Test agent behavior under resource constraints and failure scenarios
• Evaluate agent compliance with industry-specific regulations
• Document safety issues and work with developers on mitigation strategies

Required Qualifications:
• 5+ years in software testing with a focus on security or safety-critical systems
• Experience with AI/ML model testing and evaluation
• Strong analytical and problem-solving skills
• Knowledge of AI ethics and responsible AI principles
• Experience with security testing tools and methodologies
• Excellent documentation and communication skills

Preferred Qualifications:
• Background in cybersecurity or safety engineering
• Experience with red team/blue team exercises
• Knowledge of formal verification methods
• Familiarity with AI incident databases and failure analysis
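A toy version of the toxic-output prevention described above: scan agent responses against a small blocklist of disallowed patterns. Real deployments use trained safety classifiers rather than regexes; the patterns and responses below are invented for illustration.

```python
import re

# Illustrative blocklist; production systems use trained safety classifiers.
DISALLOWED_PATTERNS = [
    r"\bhow to make a weapon\b",
    r"\b(?:ssn|social security number)\s*:\s*\d",
]

def violates_safety(response: str) -> bool:
    """True if the agent response matches any disallowed pattern."""
    return any(re.search(p, response, re.IGNORECASE) for p in DISALLOWED_PATTERNS)

safe = "I can't share personal identifiers, but here is our privacy policy."
unsafe = "Sure, the customer's SSN: 123-45-6789."
print(violates_safety(safe), violates_safety(unsafe))  # False True
```

In a test suite, adversarial prompts would be replayed against the agent and every response run through a check like this, with any `True` logged as a safety defect.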

Posted 2 weeks ago


5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Purpose
The role will build, deploy, and scale machine learning and AI solutions across GMR's verticals. It will build and manage advanced analytics initiatives, predictive engines, and GenAI applications, with a focus on business outcomes, model performance, and intelligent automation. Reporting to the Head of Automation & AI, you will operate in a high-velocity, product-oriented environment with direct visibility of impact across airports, energy, infrastructure, and enterprise functions.

ORGANISATION CHART

Key Accountabilities

AI & ML Development (KPI: program plan vs. actuals)
• Build and deploy models using supervised, unsupervised, and reinforcement learning techniques for use cases such as forecasting, predictive scenarios, dynamic pricing and recommendation engines, and anomaly detection, with exposure to broad enterprise functions and businesses
• Lead development of models, NLP classifiers, and GenAI-enhanced prediction engines
• Design and integrate LLM-based features such as prompt pipelines, fine-tuned models, and inference architecture using Gemini, Azure OpenAI, Llama, etc.

End-to-End Solutioning (KPI: program plan vs. actuals)
• Translate business problems into robust data science pipelines with emphasis on accuracy, explainability, and scalability
• Own the full ML lifecycle, from data ingestion and feature engineering to model training, evaluation, deployment, retraining, and drift management

Cloud, ML & Data Engineering (KPI: 100% compliance to processes)
• Deploy production-grade models using AWS, GCP, or Azure AI platforms and orchestrate workflows using tools like Step Functions, SageMaker, Lambda, and API Gateway
• Build and optimise ETL/ELT pipelines, ensuring smooth integration with BI tools (Power BI, Qlik Sense, or similar) and business systems
• Experience with data compression and cloud FinOps is an advantage, as is hands-on use of tools such as Kafka and Apache Airflow

EXTERNAL INTERACTIONS
• Consulting and management services providers
• IT service providers / analyst firms
• Vendors

INTERNAL INTERACTIONS
• GCFO and Finance Council, Procurement Council, IT Council, HR Council (GHROC)
• GCMO / BCMO

FINANCIAL DIMENSIONS
Other Dimensions

EDUCATION QUALIFICATIONS
Engineering

Relevant Experience
• 5-8 years of hands-on experience in machine learning, AI engineering, or data science, including deploying models at scale
• Strong programming and modelling skills in languages and frameworks such as Python, SQL, scikit-learn, TensorFlow, XGBoost, and PyTorch
• Demonstrated ability to build models using supervised, unsupervised, and reinforcement learning techniques to solve complex business problems

Technical & Platform Skills
• Proven experience with cloud-native ML tools: AWS SageMaker, Azure ML Studio, Google AI Platform
• Familiarity with DevOps and orchestration tools: Docker, Git, Step Functions, Lambda, Google AI, or similar
• Comfort working with BI/reporting layers, testing, and model performance dashboards

Mathematics and Statistics
• Linear algebra, Bayesian methods, information theory, statistical inference, clustering, regression, etc.
• Collaborate with Generative AI and RPA teams to develop intelligent workflows
• Participate in rapid prototyping, technical reviews, and internal capability building

NLP and Computer Vision
• Knowledge of Hugging Face Transformers, spaCy, or similar NLP tools; YOLO, OpenCV, or similar for computer vision

COMPETENCIES
Personal Effectiveness, Social Awareness, Entrepreneurship, Problem Solving & Analytical Thinking, Planning & Decision Making, Capability Building, Strategic Orientation, Stakeholder Focus, Networking, Execution & Results, Teamwork & Interpersonal Influence

Posted 2 weeks ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time
Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
· Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
· Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
· Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
· Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
· Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
· Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
· Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
· Lead the implementation of end-to-end automation for:
  o Web interfaces (React, Angular, or other SPA frameworks)
  o Backend services (REST, GraphQL, WebSockets)
  o ML model integration endpoints (real-time inference APIs, batch pipelines)
· Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
· Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
· Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
· Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
· Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
· Lead and mentor a team of automation and QA engineers across multiple projects.
· Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
· Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
· Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
· Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
· Strong hands-on expertise with Playwright (TypeScript/JavaScript).
· Experience building custom automation frameworks and utilities from scratch.
· Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
· Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
· Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
· Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
· Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
· Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
· Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
· Proven experience leading QA/automation teams (4+ engineers).
· Strong documentation, code review, and stakeholder communication skills.
· Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
· Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
· Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
· Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
· Experience with GraphQL, Kafka, or event-driven architecture testing.
· QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
· Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
· Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
· Work on cutting-edge AI platforms shaping the future of [industry/domain].
· Collaborate with world-class AI researchers and engineers.
· Drive the quality of products used by [millions of users / high-impact clients].
· Opportunity to define test automation practices for AI, one of the most exciting frontiers in tech.
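One way to read the "non-deterministic model outputs" strategy above: instead of asserting an exact string, score a model response against a golden dataset and accept anything above a similarity threshold. The sketch below uses the stdlib's `difflib` as a crude stand-in for a real semantic-similarity metric; the responses and the 0.8 threshold are illustrative.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; a stand-in for embedding-based scores."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def close_to_golden(response: str, golden: str, threshold: float = 0.8) -> bool:
    """Pass if the response is 'close enough' to the golden answer."""
    return similarity(response, golden) >= threshold

golden = "Your order ships in 3 to 5 business days."
# Paraphrased chat response: should pass despite not matching exactly.
ok = close_to_golden("your order ships in 3-5 business days.", golden)
# Off-topic response: should fail the tolerance check.
bad = close_to_golden("I cannot help with that request.", golden)
print(ok, bad)  # True False
```

In a Playwright suite the same idea applies in TypeScript: capture the chat widget's rendered reply, then assert its similarity to a golden answer rather than its exact text.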

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

What does day-to-day look like:
Design, develop, and maintain infrastructure-as-code projects using CDK for Terraform (CDKTF) with TypeScript.
Write reusable, modular TypeScript constructs that represent Terraform resources and modules.
Manage Terraform state and lifecycle effectively within CDKTF projects.
Use Terraform CLI commands such as terraform init, terraform validate, terraform plan, and terraform apply to deploy infrastructure changes.
Write Jest-based unit tests to validate CDKTF constructs and configurations.
Collaborate with DevOps and cloud engineering teams to deliver automated, reliable, and scalable infrastructure.
Troubleshoot and resolve issues related to Terraform state, drift, and deployment failures.
Maintain clear documentation of infrastructure code and deployment processes.
Keep up-to-date with Terraform and CDKTF ecosystem improvements and best practices.
Requirements:
Strong proficiency with CDK for Terraform (CDKTF) using TypeScript.
Solid knowledge of TypeScript fundamentals: interfaces, classes, modules, and typing.
Hands-on experience writing Terraform configurations and resource blocks programmatically via TypeScript in CDKTF.
Understanding of Terraform core concepts such as state management, modules, and providers.
Ability to debug TypeScript CDKTF code, write unit tests using Jest, and ensure high code quality.
Experience with Terraform CLI commands (terraform init, terraform validate, terraform plan, terraform apply).
Familiarity with infrastructure-as-code best practices and automation workflows.
Comfortable troubleshooting Terraform state issues and resource lifecycle management.
Knowledge of cloud infrastructure provisioning and the Terraform ecosystem.
Perks of Freelancing:
Work in a fully remote environment.
Opportunity to work on cutting-edge AI projects with leading LLM companies.
Potential for contract extension based on performance and project needs.
Offer Details: Commitments Required: at least 4 hours per day and minimum 20/30/40 hours per week with 4 hours overlap with PST Employment type: Contractor position (no medical/paid leave) Duration of contract: 1 month; [expected start date is next week]
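The Terraform CLI sequence the posting lists (terraform init, validate, plan, apply) is often scripted in automation; the sketch below is illustrative only, assuming the terraform binary is on PATH and using its standard `-chdir` global option. The `stacks/dev` directory name is a made-up example.

```python
import subprocess

def terraform_cmd(action, workdir, auto_approve=False):
    """Build the argument list for a Terraform CLI invocation.

    Kept separate from execution so command construction is unit-testable
    without Terraform installed.
    """
    cmd = ["terraform", f"-chdir={workdir}", action]
    if action == "apply" and auto_approve:
        cmd.append("-auto-approve")
    return cmd

def run_lifecycle(workdir):
    """Run the init, validate, plan, apply sequence the posting describes."""
    for action in ("init", "validate", "plan", "apply"):
        subprocess.run(terraform_cmd(action, workdir, auto_approve=True), check=True)

assert terraform_cmd("plan", "stacks/dev") == ["terraform", "-chdir=stacks/dev", "plan"]
```

Splitting command construction from execution is what makes such wrappers testable in CI without touching real infrastructure.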

Posted 2 weeks ago

Apply

18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Brief House of Shipping provides business consultancy and advisory services for Shipping & Logistics companies. House of Shipping's commitment to its customers begins with developing an understanding of their business fundamentals. We are hiring on behalf of one of our key US-based clients - a globally recognized service provider of flexible and scalable outsourced warehousing solutions, designed to adapt to the evolving demands of today’s supply chains. Currently House of Shipping is looking to identify a high-caliber Data Science Lead. This position is an on-site position in Hyderabad. Background and experience: 15–18 years in data science, with 5+ years in leadership roles Proven track record in building and scaling data science teams in logistics, e-commerce, or manufacturing Strong understanding of statistical learning, ML architecture, productionizing models, and impact tracking Job purpose: To lead enterprise-scale data science initiatives in supply chain optimization, forecasting, network analytics, and predictive maintenance. This role blends technical leadership with strategic alignment across business units and manages advanced analytics teams to deliver measurable business impact.
Main tasks and responsibilities: Define and drive the data science roadmap across forecasting (demand, returns), route optimization, warehouse simulation, inventory management, and fraud detection Architect end-to-end pipelines with engineering teams: from data ingestion and model development to API deployment Lead the design and deployment of ML models using Python (Scikit-Learn, XGBoost, PyTorch, LightGBM), and MLOps tools like MLflow, Vertex AI, or AWS SageMaker Collaborate with operations, product, and technology to prioritize AI use cases and define business metrics Manage experimentation frameworks (A/B testing, simulation models) and statistical hypothesis testing Mentor team members in model explainability, interpretability, and ethical AI practices Ensure robust model validation, drift monitoring, retraining schedules, and version control Contribute to organizational data maturity: feature stores, reusable components, metadata tracking Own team hiring, capability development, project estimation, and stakeholder presentations Collaborate with external vendors, universities, and open-source projects where applicable Education requirements: Bachelor’s, Master’s, or PhD in Computer Science, Mathematics, Statistics, Operations Research Preferred: Certifications in Cloud ML stacks (AWS/GCP/Azure), MLOps, or Applied AI Competencies and skills: Strategic vision in AI applications across supply chain Team mentorship and delivery ownership Expertise in statistical and ML frameworks MLOps pipeline management and deployment best practices Strong business alignment and executive communication
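As a small illustration of the statistical hypothesis testing this role manages, a two-proportion z-test for an A/B experiment fits in a few lines of plain Python; the conversion counts below are invented for the example:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of two A/B variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers only: variant B converts 5.5% vs 5.0% on 20k users each.
z = two_proportion_z(1000, 20000, 1100, 20000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
```

Production experimentation frameworks add guardrails (power analysis, sequential-testing corrections), but the core statistic is this simple.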

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description About Sutherland Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes and enduring relationships. Sutherland: Unlocking digital performance. Delivering measurable results. Job Description We are hiring a quality-focused AI Tester to design and execute validation frameworks for AI models and systems. Reporting to the AI Manager, this role ensures that AI solutions meet accuracy, performance, reliability, and ethical compliance benchmarks before production deployment. Key Responsibilities: Develop and execute test plans and cases for AI/ML models, GenAI solutions, and agent-based systems. Validate model outputs for accuracy, precision, recall, latency, and interpretability across diverse datasets. Conduct bias testing, data drift analysis, and adversarial robustness validation. Collaborate with AI developers, data scientists, and business analysts to define acceptance criteria. Automate testing pipelines and integrate with CI/CD environments where applicable. Maintain traceability matrices, defect logs, and model validation documentation.
Qualifications Required Qualifications: 3–5 years of experience in software testing or QA, with at least 1–2 years in AI/ML or data-centric testing. Strong understanding of AI model evaluation metrics, data sampling, and validation techniques. Experience using testing frameworks (e.g., PyTest, Great Expectations) and visualization tools. Familiarity with Python, JSON, and REST APIs. Bachelor’s degree in Computer Science, Statistics, or a related field. Preferred Skills: Experience testing LLMs, GenAI prompts, or RAG workflows. Exposure to responsible AI frameworks and model audit practices. ISTQB or other QA certifications are a plus.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: QA Engineer – Generative AI Testing Location: [Hybrid – Hyderabad/Vishakhapatnam] Job Type: [Full-time] Job Summary: We are seeking a skilled and detail-oriented QA Engineer with expertise in Generative AI Testing to join our team. In this role, you will be responsible for ensuring the accuracy, safety, and performance of AI-driven applications, particularly in the field of Generative AI. You will design and execute test strategies, develop automation frameworks, and validate AI outputs to maintain high-quality user experiences. Key Responsibilities: Design, develop, and execute comprehensive test plans for Generative AI models and applications. Validate AI-generated outputs for accuracy, coherence, bias, ethical considerations, and alignment with business requirements. Develop and maintain automated testing frameworks for AI-based applications, ensuring scalability and efficiency. Perform adversarial testing, edge case analysis, and security testing to assess AI vulnerabilities. Collaborate with AI/ML engineers, product managers, and developers to refine model performance and mitigate errors. Monitor AI model drift and ensure consistent performance across different inputs and datasets. Conduct performance testing to evaluate response times, scalability, and reliability of AI systems. Document test cases, test results, and defects while ensuring compliance with AI governance and ethical guidelines. Continuously explore new tools and methodologies to improve AI testing processes. Requirements: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 3+ years of experience in Quality Assurance, with at least 1-2 years in AI/ML testing or Generative AI applications. Strong understanding of AI/ML concepts, NLP models, LLMs, and deep learning architectures. Experience with AI testing tools such as LangTest, DeepChecks, TruLens, or custom AI evaluation frameworks.
Proficiency in test automation using Python, Selenium, PyTest, or similar frameworks. Familiarity with model evaluation metrics such as BLEU, ROUGE, perplexity, and precision-recall for AI-generated content. Knowledge of bias detection, adversarial testing, and ethical AI considerations. Experience working with APIs, cloud platforms (AWS, Azure, GCP), and MLOps practices. Strong analytical and problem-solving skills with attention to detail. Excellent communication and collaboration skills to work in cross-functional teams. Nice to Have: Experience in testing AI chatbots, voice assistants, or image/video-generating AI. Knowledge of LLM fine-tuning and reinforcement learning from human feedback (RLHF). Exposure to regulatory and compliance frameworks for AI governance. Why Join Us? Opportunity to work on cutting-edge AI products. Collaborate with a team of AI and software experts. Competitive salary, benefits, and career growth opportunities. If you’re interested, please share your resume with Dkadam@eprosoft.com
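For context on the evaluation metrics listed (BLEU, ROUGE), the core of BLEU is clipped n-gram precision. Below is a deliberately simplified unigram-only sketch in plain Python; real BLEU adds higher-order n-grams and a brevity penalty, and the sentences are invented examples:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the building block of BLEU.

    Each candidate word counts only up to the number of times it appears
    in the reference (the "clipping" that stops degenerate repetition).
    """
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand) if cand else 0.0

score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
```

Here five of the six candidate tokens are matched after clipping, giving 5/6. In practice a library such as NLTK or sacreBLEU would be used rather than a hand-rolled metric.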

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description About Sutherland Artificial Intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes and enduring relationships. Sutherland: Unlocking digital performance. Delivering measurable results. Job Description We are looking for a proactive and detail-oriented AI OPS Engineer to support the deployment, monitoring, and maintenance of AI/ML models in production. Reporting to the AI Developer, this role will focus on MLOps practices including model versioning, CI/CD, observability, and performance optimization in cloud and hybrid environments. Key Responsibilities: Build and manage CI/CD pipelines for ML models using platforms like MLflow, Kubeflow, or SageMaker. Monitor model performance and health using observability tools and dashboards. Ensure automated retraining, version control, rollback strategies, and audit logging for production models. Support deployment of LLMs, RAG pipelines, and agentic AI systems in scalable, containerized environments. Collaborate with AI Developers and Architects to ensure reliable and secure integration of models into enterprise systems.
Troubleshoot runtime issues, latency, and accuracy drift in model predictions and APIs. Contribute to infrastructure automation using Terraform, Docker, Kubernetes, or similar technologies. Qualifications Required Qualifications: 3–5 years of experience in DevOps, MLOps, or platform engineering roles with exposure to AI/ML workflows. Hands-on experience with deployment tools like Jenkins, Argo, GitHub Actions, or Azure DevOps. Strong scripting skills (Python, Bash) and familiarity with cloud environments (AWS, Azure, GCP). Understanding of containerization, service orchestration, and monitoring tools (Prometheus, Grafana, ELK). Bachelor’s degree in Computer Science, IT, or a related field. Preferred Skills: Experience supporting GenAI or LLM applications in production. Familiarity with vector databases, model registries, and feature stores. Exposure to security and compliance standards in model lifecycle management.
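Drift monitoring of the kind this role covers is often quantified with the Population Stability Index; a minimal plain-Python sketch follows, where the bin proportions and the 0.2 alert threshold are illustrative conventions rather than anything from the posting:

```python
import math

def psi(expected, actual):
    """Population Stability Index over pre-binned distributions.

    Inputs are lists of bin proportions that each sum to 1. A common rule
    of thumb treats PSI > 0.2 as significant drift (a convention, not a
    standard).
    """
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time feature distribution
current = [0.10, 0.20, 0.30, 0.40]    # production window
drifted = psi(baseline, current) > 0.2
```

In an AI Ops setup this check would run on a schedule, with the PSI value exported to Prometheus/Grafana and the threshold breach raising an alert.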

Posted 2 weeks ago

Apply

1.0 - 2.0 years

1 - 5 Lacs

Gurgaon

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space with over 16,700 stores. It has footprints across 31 countries and territories. Circle K India Data & Analytics team is an integral part of ACT’s Global Data & Analytics Team, and the Associate ML Ops Analyst will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. About the role The incumbent will be responsible for implementing Azure data services to deliver scalable and sustainable solutions, and for building model deployment and monitoring pipelines to meet business needs. Roles & Responsibilities Development and Integration Collaborate with data scientists to deploy ML models into production environments Implement and maintain CI/CD pipelines for machine learning workflows Use version control tools (e.g., Git) and ML lifecycle management tools (e.g., MLflow) for model tracking, versioning, and management. Design, build, and optimize application containerization and orchestration with Docker, Kubernetes, and cloud platforms like AWS or Azure Automation & Monitoring Automate pipelines using Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow Implement model monitoring and alerting systems to track model performance, accuracy, and data drift in production environments. Collaboration and Communication Work closely with data scientists to ensure that models are production-ready Collaborate with Data Engineering and Tech teams to ensure infrastructure is optimized for scaling ML applications. Optimization and Scaling Optimize ML pipelines for performance and cost-effectiveness Operational Excellence Help the Data teams leverage best practices to implement Enterprise level solutions.
Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project Help define common coding standards and model monitoring performance best practices Continuously evaluate the latest packages and frameworks in the ML ecosystem Build automated model deployment and data engineering pipelines from plain Python/PySpark code Stakeholder Engagement Collaborate with Data Scientists, Data Engineers, cloud platform and application engineers to create and implement cloud policies and governance for the ML model life cycle. Job Requirements Education & Relevant Experience Bachelor’s degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.) Master’s degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.) 1-2 years of relevant working experience in MLOps Behavioural Skills Delivery Excellence Business disposition Social intelligence Innovation and agility Knowledge Knowledge of core computer science concepts such as common data structures and algorithms, OOP Programming languages (R, Python, PySpark, etc.) Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.) Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.), non-relational (MongoDB, DynamoDB) database management systems, and Data Engineering tools Exposure to ETL tools and version control Experience in building and maintaining CI/CD pipelines for ML models. Understanding of machine learning, information retrieval, or recommendation systems Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, GitLab). #LI-DS1
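The model monitoring and alerting described above can be reduced to a small rolling-window check; the class below is an illustrative sketch with invented baseline and tolerance values, not a reference to any specific production system:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker that flags an alert when production
    accuracy drops below a baseline minus a tolerance."""

    def __init__(self, baseline, tolerance=0.05, window=100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.window = deque(maxlen=window)  # keeps only the latest outcomes

    def record(self, correct):
        self.window.append(1 if correct else 0)

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def alert(self):
        acc = self.accuracy
        return acc is not None and acc < self.baseline - self.tolerance

mon = AccuracyMonitor(baseline=0.90)
for correct in [True] * 80 + [False] * 20:  # simulated production outcomes
    mon.record(correct)
```

A real deployment would feed this from logged predictions joined with delayed ground truth, and route the alert to a paging or dashboard system.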

Posted 2 weeks ago

Apply

0 years

4 - 16 Lacs

Gurgaon

On-site

About the Role We are seeking an experienced Senior DevOps/MLOps Engineer to lead and manage a high-performing engineering team. You will oversee the deployment and scaling of machine learning models and backend services using modern DevOps and MLOps practices. Proficiency in FastAPI, Docker, Kubernetes, and CI/CD is essential. Key Responsibilities Team Leadership: Guide and manage a team of DevOps/MLOps engineers. FastAPI Deployment: Optimize, containerize, and deploy FastAPI applications at scale. Infrastructure as Code (IaC): Use tools like Terraform or Helm to manage infrastructure. Kubernetes Management: Handle multi-environment Kubernetes clusters (GKE, EKS, AKS, or on-prem). Model Ops: Manage the ML model lifecycle: versioning, deployment, monitoring, and rollback. CI/CD Pipelines: Design and maintain robust pipelines for model and application deployment. Monitoring & Logging: Set up observability tools (Prometheus, Grafana, ELK, etc.). Security & Compliance: Ensure secure infrastructure and data pipelines. Required Skills FastAPI: Deep understanding of building, scaling, and securing APIs. Docker & Kubernetes: Expert-level experience in containerization and orchestration. CI/CD Tools: GitHub Actions, GitLab CI, Jenkins, ArgoCD, or similar. Cloud Platforms: AWS/GCP/Azure. Python: Strong scripting and automation skills. ML Workflow Tools (preferred): MLflow, DVC, Kubeflow, or Seldon. Preferred Qualifications Experience in managing hybrid cloud/on-premise deployments. Strong communication and mentoring skills. Understanding of data pipelines, feature stores, and model drift monitoring. Job Types: Full-time, Permanent Pay: ₹426,830.06 - ₹1,653,904.80 per year Work Location: In person Speak with the employer +91 9867786230

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

inFeedo is a fast-growing, AI-led enterprise focused on transforming the employee experience through human-centric technology. As data and intelligent systems become core to our mission, we’re seeking a seasoned Risk Manager to build and oversee our frameworks around data privacy risk, AI governance and risk management, third-party risk, and information security. This role will be an integral part of our Data Privacy, Risk, and Compliance Team. No. of positions: 1 What will you be doing? 🌐 Enterprise Risk Management Design and implement enterprise risk frameworks tailored to a high-growth SaaS environment, and in line with global standards. Partner with business units and product teams to embed risk-aware decision-making. 🔐 Data Privacy & Information Security Oversee compliance and security standards (e.g., ISO 27001, SOC 2, NIST CSF, GDPR, DPDP, etc.). Conduct privacy impact assessments and data classification audits. Guide data lifecycle policies and secure data handling practices. 🤖 AI/ML Risk Establish controls and review mechanisms for fairness, explainability, model drift, and systemic AI risk. Support internal AI ethics boards or review councils. Ensure compliance with emerging AI regulations (e.g., EU AI Act, NIST AI RMF). 🧩 Third-Party & Vendor Risk Perform risk assessments for third-party tools and data processors. Implement contractual clauses and SLAs that uphold compliance and security. ⚙️ Operational Risk & Incident Response Lead tabletop exercises, red teaming simulations, and post-incident reviews with relevant stakeholders. Collaborate with the Security Engineer and Legal for incident handling and reporting. Who will you work with? Varun, Seema, and of course the rest of the jovial inFeedo team. Ideal Profile: 6–7 years of experience in data governance, AI/ML risk, cybersecurity, or risk management roles. Strong grounding in global frameworks: NIST CSF, NIST AI RMF, ISO 27001/27701, SOC2, GDPR, DPDP.
Prior experience working with security architects, ML engineers, and compliance teams. Certifications such as CIPT, CISA, CRISC, ISO 27001 LA, or AI Governance programs are a plus. Comfortable working with cross-functional stakeholders, with the ability to influence without authority. Strong inclination to learn and adapt to new technologies. Bonus if you've led risk functions in SaaS or high-scale digital-first organizations. Our expectations before you click “Apply Now”: Read about inFeedo & Amber. We are an equal-opportunity employer and value diversity at inFeedo. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or education. [Attitude > Skills > Education]

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Responsibilities Analyze raw data: assessing quality, cleansing, structuring for downstream processing Design accurate and scalable prediction algorithms Collaborate with the engineering team to bring analytical prototypes to production Generate actionable insights for business improvements Statistical Modeling: Develop and implement core statistical models, including linear and logistic regression, decision trees, and various classification algorithms. Analyze and interpret model outputs to inform business decisions. Advanced NLP: Work on complex NLP tasks, including data cleansing, text preprocessing, and feature engineering. Develop models for text classification, sentiment analysis, and entity recognition. LLM Integration: Design and optimize pipelines for integrating Large Language Models (LLMs) into applications, with a focus on Retrieval-Augmented Generation (RAG) systems. Work on fine-tuning LLMs to enhance their performance on domain-specific tasks. ETL Processes: Design ETL (Extract, Transform, Load) processes to ensure that data is accurately extracted from various sources, transformed into usable formats, and loaded into data warehouses or databases for analysis. BI Reporting and SQL: Collaborate with BI teams to ensure that data pipelines support efficient reporting. Write complex SQL queries to extract, analyze, and visualize data for business intelligence reports. Ensure that data models are optimized for reporting and analytics. Data Storage and Management: Collaborate with data engineers to design and implement efficient storage solutions for structured datasets and semi-structured text datasets.
Ensure that data is accessible, well-organized, and optimized for retrieval. Model Evaluation and Optimization: Regularly evaluate models using appropriate metrics and improve them through hyperparameter tuning, feature selection, and other optimization techniques. Deploy models in production environments and monitor their performance. Collaboration: Work closely with cross-functional teams, including software engineers, data engineers, and product managers, to integrate models into applications and ensure they meet business requirements. Innovation: Stay updated with the latest advancements in machine learning, NLP, and data engineering. Experiment with new algorithms, tools, and frameworks to continuously enhance the capabilities of our models and data processes. Qualifications Overall Experience: 5+ years of overall experience working in a modern software engineering environment with exposure to best practices in code management, DevOps, and cloud data/ML engineering. Proven track record of developing and deploying machine learning models in production. ML Experience: 3+ years of experience in machine learning engineering or data science, with a focus on fundamental statistical modeling. Experience in feature engineering, basic model tuning, and understanding model drift over time. Strong foundations in statistics for applied ML. Data Experience: 1+ year(s) in building data engineering ETL processes and BI reporting. NLP Experience: 1+ year(s) of experience working on NLP use cases, including large-scale text data processing, storage, and fundamental NLP models for text classification, topic modeling and/or, more recently, LLM models and their applications. Core Technical Skills: Proficiency in Python and relevant ML and/or NLP-specific libraries. Strong SQL skills for data querying, analysis, and BI reporting. Experience with ETL tools and data pipeline management.
BI Reporting: Experience in designing and optimizing data models for BI reporting, using tools like Tableau, Power BI, or similar. Education: Bachelor’s or Master’s degree in Computer Science or Data Science.
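The hyperparameter tuning mentioned in this posting is, at its simplest, an exhaustive grid search; the sketch below uses a toy scoring function in place of a real cross-validated model fit, and the parameter names are invented:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: return the best-scoring parameter combination."""
    keys = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy stand-in for a cross-validated model score (a real run would fit a model
# per combination); peaks at max_depth=5, lr=0.1 by construction.
def toy_score(p):
    return -abs(p["max_depth"] - 5) - abs(p["lr"] - 0.1)

best, score = grid_search({"max_depth": [3, 5, 7], "lr": [0.01, 0.1, 0.3]}, toy_score)
```

Libraries like scikit-learn provide the same idea as `GridSearchCV` with cross-validation built in; the loop above just makes the mechanics explicit.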

Posted 2 weeks ago

Apply

0 years

7 - 8 Lacs

Hyderābād

Remote

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - ML/CV Ops Engineer! We are seeking a highly skilled ML CV Ops Engineer to join our AI Engineering team. This role is focused on operationalizing Computer Vision models—ensuring they are efficiently trained, deployed, monitored, and retrained across scalable infrastructure or edge environments. The ideal candidate has deep technical knowledge of ML infrastructure, DevOps practices, and hands-on experience with CV pipelines in production. You’ll work closely with data scientists, DevOps, and software engineers to ensure computer vision models are always robust, secure, and production-ready. Key Responsibilities: End-to-End Pipeline Automation: Build and maintain ML pipelines for computer vision tasks (data ingestion, preprocessing, model training, evaluation, inference).
Use tools like MLflow, Kubeflow, DVC, and Airflow to automate workflows. Model Deployment & Serving: Package and deploy CV models using Docker and orchestration platforms like Kubernetes. Use model-serving frameworks (TensorFlow Serving, TorchServe, Triton Inference Server) to enable real-time and batch inference. Monitoring & Observability: Set up model monitoring to detect drift, latency spikes, and performance degradation. Integrate custom metrics and dashboards using Prometheus, Grafana, and similar tools. Model Optimization: Convert and optimize models using ONNX, TensorRT, or OpenVINO for performance and edge deployment. Implement quantization, pruning, and benchmarking pipelines. Edge AI Enablement (Optional but Valuable): Deploy models on edge devices (e.g., NVIDIA Jetson, Coral, Raspberry Pi) and manage updates and logs remotely. Collaboration & Support: Partner with Data Scientists to productionize experiments and guide model selection based on deployment constraints. Work with DevOps to integrate ML models into CI/CD pipelines and cloud-native architecture. Qualifications we seek in you! Minimum Qualifications Bachelor’s or Master’s in Computer Science, Engineering, or a related field. Sound experience in ML engineering, with significant work in computer vision and model operations. Strong coding skills in Python and familiarity with scripting for automation. Hands-on experience with PyTorch, TensorFlow, OpenCV, and model lifecycle tools like MLflow, DVC, or SageMaker. Solid understanding of containerization and orchestration (Docker, Kubernetes). Experience with cloud services (AWS/GCP/Azure) for model deployment and storage. Preferred Qualifications: Experience with real-time video analytics or image-based inference systems. Knowledge of MLOps best practices (model registries, lineage, versioning). Familiarity with edge AI deployment and acceleration toolkits (e.g., TensorRT, DeepStream).
Exposure to CI/CD pipelines and modern DevOps tooling (Jenkins, GitLab CI, ArgoCD). Contributions to open-source ML/CV tooling or experience with labeling workflows (CVAT, Label Studio). Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 16, 2025, 3:14:00 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
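The quantization step named under Model Optimization boils down to mapping floats to low-bit integers with a scale and zero-point. The plain-Python sketch below is illustrative only; real toolchains like TensorRT or OpenVINO do this per-tensor or per-channel with calibration data:

```python
def quantize(values, bits=8):
    """Affine quantization of floats to unsigned ints with scale/zero-point,
    the core idea behind INT8 model compression."""
    lo, hi = min(values), max(values)
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax if hi != lo else 1.0
    zero_point = round(-lo / scale)
    # Clamp to the representable range after rounding.
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(weights)
restored = dequantize(q, s, z)  # close to the originals, within one scale step
```

The benchmarking pipelines the posting mentions would then compare accuracy and latency of the quantized model against the float baseline before promoting it to edge devices.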

Posted 2 weeks ago

Apply

5.0 - 9.0 years

18 - 22 Lacs

Bengaluru

Work from Office

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients’ toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital Key Responsibilities: Design, develop, and maintain CI/CD pipelines for ML models and data workflows. Collaborate with data science teams to productionize models using tools like MLflow, Kubeflow, or SageMaker. Automate training, validation, testing, and deployment of machine learning models. Monitor model performance, drift, and retraining needs. Ensure version control of datasets, code, and model artifacts. Implement model governance, audit trails, and reproducibility. Optimize model serving infrastructure (REST APIs, batch/streaming inference). Integrate ML solutions with cloud services (AWS, Azure, GCP). Ensure security, compliance, and reliability of ML systems. Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or related field. 4+ years of experience in MLOps, DevOps, or ML engineering roles. Strong experience with ML pipeline tools (MLflow, Kubeflow, TFX, SageMaker Pipelines). 
Proficiency in containerization and orchestration tools (Docker, Kubernetes, Airflow). Strong Python coding skills and familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch). Experience with cloud platforms (AWS, Azure, GCP) and their ML services. Knowledge of CI/CD tools (GitLab CI/CD, Jenkins, GitHub Actions). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Sentry). Understanding of data versioning (DVC, LakeFS) and feature stores (Feast, Tecton). Strong grasp of model testing, validation, and monitoring in production environments. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com) Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. 
We also offer: Group Health Insurance covering a family of 4, Term Insurance and Accident Insurance, Paid Holidays & Earned Leaves, Paid Parental Leave, Learning & Career Development, and Employee Wellness.
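The governance responsibilities in this role (version control of model artifacts, audit trails, rollback) can be pictured with a toy in-memory registry. This is an illustrative sketch only — in practice a team would use MLflow's or SageMaker's model registry rather than hand-rolling one, and every name below is invented:

```python
import hashlib
import time

class ModelRegistry:
    """Toy registry: append-only version history, staged promotion, and rollback."""
    def __init__(self):
        self._versions = []  # append-only audit trail of registered artifacts
        self._stages = {}    # stage name -> currently promoted version number

    def register(self, name, artifact_bytes, metrics):
        version = len(self._versions) + 1
        self._versions.append({
            "name": name,
            "version": version,
            "sha256": hashlib.sha256(artifact_bytes).hexdigest(),  # artifact integrity
            "metrics": metrics,
            "registered_at": time.time(),
        })
        return version

    def promote(self, version, stage="production"):
        self._stages[stage] = version

    def rollback(self, stage="production"):
        # assumes a previous version exists; a real registry would validate this
        self._stages[stage] -= 1
        return self._stages[stage]

    def get(self, stage="production"):
        return self._versions[self._stages[stage] - 1]

reg = ModelRegistry()
v1 = reg.register("churn-model", b"weights-v1", {"auc": 0.81})
v2 = reg.register("churn-model", b"weights-v2", {"auc": 0.84})
reg.promote(v2)
print(reg.get()["version"])   # 2
reg.rollback()                # v2 misbehaves in production: revert instantly
print(reg.get()["version"])   # 1
```

The hash gives reproducibility (you can verify the deployed bytes match the registered ones), and the append-only history is the audit trail the listing asks for.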

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Responsibilities Analyze raw data: assessing quality, cleansing, structuring for downstream processing Design accurate and scalable prediction algorithms Collaborate with the engineering team to bring analytical prototypes to production Generate actionable insights for business improvements Statistical Modeling: Develop and implement core statistical models, including linear and logistic regression, decision trees, and various classification algorithms. Analyze and interpret model outputs to inform business decisions. Advanced NLP: Work on complex NLP tasks, including data cleansing, text preprocessing, and feature engineering. Develop models for text classification, sentiment analysis, and entity recognition. LLM Integration: Design and optimize pipelines for integrating Large Language Models (LLMs) into applications, with a focus on Retrieval-Augmented Generation (RAG) systems. Work on fine-tuning LLMs to enhance their performance on domain-specific tasks. ETL Processes: Design ETL (Extract, Transform, Load) processes to ensure that data is accurately extracted from various sources, transformed into usable formats, and loaded into data warehouses or databases for analysis. BI Reporting and SQL: Collaborate with BI teams to ensure that data pipelines support efficient reporting. Write complex SQL queries to extract, analyze, and visualize data for business intelligence reports. Ensure that data models are optimized for reporting and analytics. Data Storage and Management: Collaborate with data engineers to design and implement efficient storage solutions for structured datasets and semi-structured text datasets. 
Ensure that data is accessible, well-organized, and optimized for retrieval. Model Evaluation and Optimization: Regularly evaluate models using appropriate metrics and improve them through hyperparameter tuning, feature selection, and other optimization techniques. Deploy models in production environments and monitor their performance. Collaboration: Work closely with cross-functional teams, including software engineers, data engineers, and product managers, to integrate models into applications and ensure they meet business requirements. Innovation: Stay updated with the latest advancements in machine learning, NLP, and data engineering. Experiment with new algorithms, tools, and frameworks to continuously enhance the capabilities of our models and data processes. Qualifications Overall Experience: 5+ years of overall experience working in a modern software engineering environment with exposure to best practices in code management, DevOps, and cloud data/ML engineering. Proven track record of developing and deploying machine learning models in production. ML Experience: 3+ years of experience in machine learning engineering and data science, with a focus on fundamental statistical modeling. Experience in feature engineering, basic model tuning, and understanding model drift over time. Strong foundations in statistics for applied ML. Data Experience: 1+ year(s) in building data engineering ETL processes and BI reporting. NLP Experience: 1+ year(s) of experience working on NLP use cases, including large-scale text data processing, storage, and fundamental NLP models for text classification, topic modeling, and/or more recently, LLM models and their applications. Core Technical Skills: Proficiency in Python and relevant ML and/or NLP-specific libraries. Strong SQL skills for data querying, analysis, and BI reporting. Experience with ETL tools and data pipeline management. 
BI Reporting: Experience in designing and optimizing data models for BI reporting, using tools like Tableau, Power BI, or similar. Education: Bachelor’s or Master’s degree in Computer Science or Data Science.
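The RAG systems this role centers on hinge on a retrieval step: embed the query, score it against a document store, and pass the top hits to the LLM as context. A deliberately simplified, dependency-free sketch — a bag-of-words overlap stands in for a real sentence encoder, and the documents are invented:

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words "embedding"; a production RAG system would use a
    # dense sentence encoder and a vector index instead
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first day of each month",
    "contact support to delete your account permanently",
]
context = retrieve("how do I reset my password", docs, k=1)
print(context[0])  # the password-reset document wins on term overlap
```

The retrieved `context` would then be prepended to the LLM prompt; swapping `embed` for a learned encoder is exactly the fine-tuning surface the listing describes.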

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies – in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients’ toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital Key Responsibilities: Design, develop, and maintain CI/CD pipelines for ML models and data workflows. Collaborate with data science teams to productionize models using tools like MLflow, Kubeflow, or SageMaker. Automate training, validation, testing, and deployment of machine learning models. Monitor model performance, drift, and retraining needs. Ensure version control of datasets, code, and model artifacts. Implement model governance, audit trails, and reproducibility. Optimize model serving infrastructure (REST APIs, batch/streaming inference). Integrate ML solutions with cloud services (AWS, Azure, GCP). Ensure security, compliance, and reliability of ML systems. Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field. 5+ years of experience in MLOps, DevOps, or ML engineering roles. Strong experience with ML pipeline tools (MLflow, Kubeflow, TFX, SageMaker Pipelines). 
Proficiency in containerization and orchestration tools (Docker, Kubernetes, Airflow). Strong Python coding skills and familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch). Experience with cloud platforms (AWS, Azure, GCP) and their ML services. Knowledge of CI/CD tools (GitLab CI/CD, Jenkins, GitHub Actions). Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, Sentry). Understanding of data versioning (DVC, LakeFS) and feature stores (Feast, Tecton). Strong grasp of model testing, validation, and monitoring in production environments. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com) Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. 
We also offer: o Group Health Insurance covering family of 4 o Term Insurance and Accident Insurance o Paid Holidays & Earned Leaves o Paid Parental Leave o Learning & Career Development o Employee Wellness
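Dataset version control of the kind this role lists (DVC, LakeFS) is content addressing at heart: hash the data, store each unique blob once, and commit only a small pointer file. A toy sketch of that mechanism — MD5 is what DVC actually uses for cache keys, but the class and file names here are invented for illustration:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """DVC-style content addressing: identical bytes -> identical version id."""
    return hashlib.md5(data).hexdigest()

class DataVersioner:
    def __init__(self):
        self.cache = {}     # hash -> bytes (the "remote" object store)
        self.pointers = {}  # path -> hash (what a .dvc pointer file records)

    def add(self, path, data: bytes):
        h = content_hash(data)
        self.cache[h] = data      # deduplicated: same bytes stored once
        self.pointers[path] = h   # the tiny pointer is what git tracks
        return h

    def checkout(self, path):
        return self.cache[self.pointers[path]]

dv = DataVersioner()
h1 = dv.add("train.csv", b"a,b\n1,2\n")
h2 = dv.add("train.csv", b"a,b\n1,2\n3,4\n")  # new rows -> new version id
print(h1 != h2)                   # True: content changed, so the id changed
print(dv.checkout("train.csv"))   # resolves to the latest bytes
```

Because the pointer, not the data, lives in git, a training run can be pinned to the exact dataset hash it used — the reproducibility guarantee the listing asks for.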

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

🚀 We’re Hiring | Data Scientist (Generative AI) – Lead 📍 Location: Nagpur 💼 Experience: 4–6 Years 🧠 Domain: Generative AI, MLOps, Predictive Modeling 📊 Mode: Full-Time | Onsite/Hybrid (based on discussion) Are you a data innovator ready to lead the next wave of AI solutions? We’re looking for a hands-on Data Science Lead to drive real-world impact through generative AI, advanced modeling, and end-to-end ML project execution. As a team lead, you’ll not only build intelligent systems but also mentor peers, guide product direction, and introduce powerful generative-AI tools like LLM-based chat assistants and RAG search pipelines. 🔍 What You’ll Do: • Build and deploy predictive, forecasting & generative AI models • Design experiments and analyze results to guide business strategy • Own ML pipelines, from raw data to monitored production models • Lead initiatives like LLM chatbots, custom embeddings & retrieval-augmented generation • Package & deploy using MLOps practices in AWS/Azure/GCP • Monitor model drift, accuracy, latency, and cost • Mentor junior team members & drive best coding practices • Communicate insights clearly to technical and non-technical teams • Ensure compliance with data governance & privacy standards • Evaluate new tools, run PoCs & keep your stack cutting-edge 🎯 Must-Have Skills: ✔️ Strong Python, SQL & stats foundation ✔️ Experience in shipping ML models from notebook to production ✔️ Cloud & container deployment experience (AWS, Azure, or GCP) ✔️ Practical experience in GenAI – prompt engineering, fine-tuning, or RAG pipelines ✔️ Leadership qualities & strong stakeholder communication 📩 Ready to lead with AI? Send your CV to: ahatesham@primeconsulting-inc.com Or DM me directly to know more!

Posted 2 weeks ago

Apply

3.0 years

9 - 12 Lacs

Goa

On-site

Key Responsibilities ● Design, build, and maintain scalable infrastructure for training and deploying machine learning models at scale. ● Operationalize ML models, including the "TruValue UAE" AVM and the property recommendation engine, by creating robust, low-latency APIs for production use. ● Develop and manage data pipelines (ETL) to feed our machine learning models with clean, reliable data for both training and real-time inference. ● Implement and manage the MLOps lifecycle, including CI/CD for models, versioning, monitoring for model drift, and automated retraining. ● Optimize the performance of machine learning models for speed and cost-efficiency in a cloud environment. ● Collaborate with backend engineers to seamlessly integrate ML services with the core platform architecture. ● Work with data scientists to understand model requirements and provide engineering expertise to improve model efficacy and feasibility. ● Build the technical backend for the AI-powered chatbot, integrating it with NLP services and the core platform data. Required Skills and Experience ● 3-5+ years of experience in a Software Engineering, Machine Learning Engineering, or related role. ● A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. ● Strong software engineering fundamentals with expert proficiency in Python. ● Proven experience deploying machine learning models into a production environment on a major cloud platform (AWS, Google Cloud, or Azure). ● Hands-on experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. ● Experience building and managing data pipelines using tools like Apache Airflow, Kubeflow Pipelines, or cloud-native solutions. ● Collaborate with cross-functional teams to integrate AI solutions into products. ● Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker) and orchestration (Kubernetes). 
Job Types: Full-time, Permanent Pay: ₹900,000.00 - ₹1,200,000.00 per year Benefits: Paid sick time Provident Fund Schedule: Day shift Fixed shift Work Location: In person
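The ETL pipelines feeding models like the "TruValue UAE" AVM follow the extract-transform-load shape described in the responsibilities above; in Python this composes naturally as chained generators, so rows stream through without materializing intermediate lists. The field names (`price`, `sqft`) and the derived feature are invented for illustration:

```python
def extract(rows):
    # in practice this would read from a source system or object store
    yield from rows

def transform(rows):
    for r in rows:
        if r.get("price") is None:
            continue  # drop rows unusable for training or inference
        yield {**r, "price_per_sqft": r["price"] / r["sqft"]}

def load(rows, sink):
    # in practice this writes to a warehouse or feature store
    for r in rows:
        sink.append(r)
    return sink

raw = [
    {"id": 1, "price": 500000, "sqft": 1000},
    {"id": 2, "price": None, "sqft": 800},  # dirty record, filtered out
]
clean = load(transform(extract(raw)), [])
print(len(clean), clean[0]["price_per_sqft"])  # 1 500.0
```

The same three-stage shape is what an Airflow DAG formalizes: each function becomes a task, and the generator chain becomes the dependency edges between them.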

Posted 2 weeks ago

Apply

5.0 years

2 - 6 Lacs

Hyderābād

On-site

Overview: As a key member of the team, you will be responsible for building and maintaining the infrastructure, tools, and workflows that enable the efficient, reliable, and secure deployment of LLMs in production environments. You will collaborate closely with data scientists, data engineers, and product teams to ensure seamless integration of AI capabilities into our core systems. Responsibilities: Design and implement scalable model deployment pipelines for LLMs, ensuring high availability and low latency. Build and maintain CI/CD workflows for model training, evaluation, and release. Monitor and optimize model performance, drift, and resource utilization in production. Manage cloud infrastructure (e.g., AWS, GCP, Azure) and container orchestration (e.g., Kubernetes, Docker) for AI workloads. Implement observability tools to track system health, token usage, and user feedback loops. Ensure security, compliance, and governance of AI systems, including access control and audit logging. Collaborate with cross-functional teams to align infrastructure with product goals and user needs. Stay current with the latest in MLOps and GenAI tooling and drive continuous improvement in deployment practices. Define and evolve the architecture for GenAI systems, ensuring alignment with business goals and scalability requirements. Qualifications: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related technical field. 5 to 7 years of experience in software engineering and DevOps, including 3+ years in machine learning infrastructure roles. Hands-on experience deploying and maintaining machine learning models in production, ideally including LLMs or other deep learning models. Proven experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Strong programming skills in Python, with experience in ML libraries (e.g., TensorFlow, PyTorch, Hugging Face). 
Proficiency in CI/CD pipelines for ML workflows. Experience with MLOps tools: MLflow, Kubeflow, DVC, Airflow, Weights & Biases. Knowledge of monitoring and observability tools.
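Tracking "latency spikes" for an LLM service, as this role requires, usually means watching a tail percentile over a rolling window rather than the mean. A self-contained sketch — the window size, the p95 choice, and the 500 ms budget are arbitrary placeholders, not anyone's production numbers:

```python
import math
from collections import deque

class LatencyMonitor:
    """Rolling-window latency tracker with a simple tail-latency alert."""
    def __init__(self, window=100, p95_limit_ms=500):
        self.samples = deque(maxlen=window)  # oldest samples age out automatically
        self.p95_limit_ms = p95_limit_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        ordered = sorted(self.samples)
        idx = max(0, math.ceil(0.95 * len(ordered)) - 1)  # nearest-rank percentile
        return ordered[idx]

    def alert(self):
        # require a minimum sample count so a cold start can't page anyone
        return len(self.samples) >= 20 and self.p95() > self.p95_limit_ms

mon = LatencyMonitor(window=50, p95_limit_ms=500)
for _ in range(40):
    mon.record(120)
print(mon.alert())        # False: p95 is 120 ms, well under budget
for _ in range(10):
    mon.record(900)       # a burst of slow requests pushes the tail up
print(mon.alert())        # True: p95 now exceeds the 500 ms budget
```

In production the `p95()` value would be exported as a metric and the alert rule would live in Prometheus/Grafana; the point of the sketch is that the tail, not the average, is what the rule should watch.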

Posted 2 weeks ago

Apply

5.0 years

4 - 9 Lacs

Noida

On-site

Posted On: 14 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description We are looking for a skilled AI/ML Ops Engineer to join our team to bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale. 
This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable. Key Responsibilities: Design and implement ML pipelines for model training, validation, testing, and deployment. Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar. Deploy machine learning models to production environments (cloud). Monitor model performance, drift, and data quality in production. Collaborate with data scientists to improve model robustness and deployment readiness. Ensure CI/CD practices for ML models using tools like Jenkins, GitHub Actions, or GitLab CI. Optimize compute resources and manage model versioning, reproducibility, and rollback strategies. Work with cloud platforms such as AWS and container orchestration tools like Kubernetes (AKS). Ensure compliance with data privacy and security standards (e.g., GDPR, HIPAA). Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 5+ years of experience in DevOps, Data Engineering, or ML Engineering roles. Strong programming skills in Python; familiarity with R, Scala, or Java is a plus. Experience with automating ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar. Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or XGBoost. Experience with ML model monitoring and alerting frameworks (e.g., Evidently, Prometheus, Grafana). Familiarity with data orchestration and ETL/ELT tools (Airflow, dbt, Prefect). Preferred Qualifications: Experience with large-scale data systems (Spark, Hadoop). Knowledge of feature stores (Feast, Tecton). Experience with streaming data (Kafka, Flink). Experience working in regulated environments (finance, healthcare, etc.). Certifications in cloud platforms or ML tools. Soft Skills: Strong problem-solving and debugging skills. Excellent communication and collaboration with cross-functional teams. 
Adaptable and eager to learn new technologies. Mandatory Competencies Data Science and Machine Learning - Data Science and Machine Learning - AI/ML Database - Database Programming - SQL Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift Development Tools and Management - Development Tools and Management - CI/CD DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Jenkins Data Science and Machine Learning - Data Science and Machine Learning - Gen AI (LLM, Agentic AI, Gen AI-enabled tools like GitHub Copilot) DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket Programming Language - Other Programming Language - Scala Big Data - Big Data - Hadoop Big Data - Big Data - SPARK Data Science and Machine Learning - Data Science and Machine Learning - Python Beh - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
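The rollback strategies this role mentions are usually implemented as blue-green or canary releases, which reduce to a traffic splitter in front of two model versions. A toy sketch of the routing logic — the class, ratio, and stand-in models are all illustrative; in practice the split lives in the service mesh or load balancer, not application code:

```python
import random

class BlueGreenRouter:
    """Toy traffic splitter for model rollout: ramp the new ('green') model
    gradually, with instant rollback by dropping the ratio back to 0."""
    def __init__(self, blue, green, green_ratio=0.0):
        self.blue, self.green = blue, green
        self.green_ratio = green_ratio

    def predict(self, x):
        model = self.green if random.random() < self.green_ratio else self.blue
        return model(x)

blue_model = lambda x: "v1"    # stable production model
green_model = lambda x: "v2"   # candidate under evaluation
router = BlueGreenRouter(blue_model, green_model, green_ratio=0.1)

random.seed(1)
preds = [router.predict(None) for _ in range(1000)]
print(0.05 < preds.count("v2") / 1000 < 0.2)  # roughly 10% routed to green

router.green_ratio = 0.0  # candidate misbehaves: instant rollback
```

The ramp schedule (10% → 50% → 100%) would be driven by the same monitoring signals — error rate, drift, latency — that the listing asks this engineer to wire up.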

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Senior DevOps (Azure, Terraform, Kubernetes) Engineer Location: Remote (Initial 2–3 months in Abu Dhabi office, then remote from India) Type: Full-time | Long-term | Direct Client Hire Client: Abu Dhabi Government About The Role Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in Azure cloud and DevOps practices. Key Responsibilities Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps, AKS Develop and maintain Infrastructure-as-Code using Terraform Manage container orchestration environments using Kubernetes Ensure cloud infrastructure is optimized, secure, and monitored effectively Collaborate with data science teams to support ML model deployment and operationalization Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow) Build and maintain automated ML pipelines to streamline model lifecycle management Required Skills 7+ years of experience in DevOps and/or MLOps roles Proficient in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps Strong expertise in Terraform and cloud-native infrastructure (AWS preferred) Hands-on experience with Kubernetes, Docker, and microservices Solid understanding of cloud networking, security, and monitoring Scripting proficiency in Bash and Python Preferred Skills Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines Knowledge of model performance monitoring and ML system reliability Familiarity with AWS MLOps stack or equivalent tools on Azure/GCP Skills: argo,terraform,kubernetes,azure
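Experiment tracking, listed above with MLflow as the example, boils down to logging parameters and metrics per run and querying for the best one. A minimal stand-in sketch to make the shape concrete — this is not MLflow's actual API, just an illustration of what such a tracker stores:

```python
import time

class ExperimentTracker:
    """Minimal MLflow-style run logger: params, metrics, and best-run lookup."""
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # each run records what was tried, how it scored, and when
        self.runs.append({"params": params, "metrics": metrics, "ts": time.time()})

    def best(self, metric, maximize=True):
        # query layer: pick the run that optimizes a chosen metric
        return (max if maximize else min)(self.runs, key=lambda r: r["metrics"][metric])

tracker = ExperimentTracker()
tracker.log_run({"lr": 0.1}, {"auc": 0.79})
tracker.log_run({"lr": 0.01}, {"auc": 0.84})
print(tracker.best("auc")["params"]["lr"])  # 0.01
```

With MLflow the equivalent calls are `log_param`/`log_metric` inside a run context, and the lookup is a search over the tracking server; the sketch just shows why that record is what makes a promotion decision auditable.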

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Greater Bengaluru Area

On-site

Key Responsibilities: Manage a QA team of 6 engineers focused on validating data pipelines, APIs, and front-end applications. Define, implement, and maintain test automation strategies for: ETL workflows (Airflow DAGs) API contracts, performance, and data sync UI automation for internal and external portals Collaborate with Data Engineering and Product teams to ensure accurate data ingestion from third-party systems such as Workday, ADP, Greenhouse, Lever, etc. Build and maintain robust automated regression suites for API and UI layers using industry-standard tools. Implement data validation checks, including row-level comparisons, schema evolution testing, null/missing value checks, and referential integrity checks. Own and evolve CI/CD quality gates and integrate automated tests into GitHub Actions, Jenkins, or equivalent. Ensure test environments are reproducible, version-controlled, and equipped for parallel test execution. Mentor QA team members in advanced scripting, debugging, and root-cause analysis practices. Develop monitoring/alerting frameworks for data freshness, job failures, and drift detection using Airflow and observability tools. Technical Skills: Core QA & Automation: Strong hands-on experience with Selenium, Playwright, or Cypress for UI automation. Deep expertise in API testing using Postman, REST-assured, Karate, or similar frameworks. Familiar with contract testing using Pact or similar tools. Strong understanding of BDD/TDD frameworks (e.g., Pytest-BDD, Cucumber). ETL / Data Quality: Experience testing ETL pipelines, preferably using Apache Airflow. Hands-on experience with SQL and data validation tools such as Great Expectations or custom Python data validators. Understanding of data modeling, schema versioning, and data lineage. Languages & Scripting: Strong programming/scripting skills in Python (required), with experience using it for test automation and data validations. 
Familiarity with Bash, YAML, and JSON for pipeline/test configurations. DevOps & CI/CD: Experience integrating tests into pipelines using tools like GitHub Actions, Jenkins, CircleCI, or GitLab CI. Familiarity with containerized environments using Docker and possibly Kubernetes. Monitoring & Observability: Working knowledge of log aggregation and monitoring tools like Datadog, Grafana, Prometheus, or Splunk. Experience with Airflow monitoring, job-level metrics, and alerts for test/data failures. Qualifications: 15+ years in QA/QE roles with 3+ years in a leadership or management capacity. Strong foundation in testing data-centric and distributed systems. Proven ability to define and evolve automation strategies in agile environments. Excellent analytical, communication, and organizational skills. Preferred: Experience with data graphs, knowledge graphs, or employee graph modeling. Exposure to cloud platforms (AWS/GCP) and data services (e.g., S3, BigQuery, Redshift). Familiarity with the HR tech domain and integration challenges.
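The row-level, schema, null, and referential-integrity checks this role describes can be expressed as a small pure-Python validator in the spirit of Great Expectations. The column names, toy rows, and reference table below are invented for illustration:

```python
def validate(rows, schema, key=None, ref_keys=None):
    """Minimal data-quality checks: schema conformance, null checks,
    and referential integrity against a reference key set."""
    errors = []
    for i, row in enumerate(rows):
        for col, typ in schema.items():
            if col not in row or row[col] is None:
                errors.append(f"row {i}: null/missing '{col}'")
            elif not isinstance(row[col], typ):
                errors.append(f"row {i}: '{col}' expected {typ.__name__}")
        if key and ref_keys is not None and row.get(key) not in ref_keys:
            errors.append(f"row {i}: '{key}'={row.get(key)!r} not in reference table")
    return errors

schema = {"employee_id": int, "email": str}
employees = [
    {"employee_id": 1, "email": "a@x.com"},
    {"employee_id": 2, "email": None},          # null check fires
    {"employee_id": "3", "email": "c@x.com"},   # type and referential checks fire
]
errs = validate(employees, schema, key="employee_id", ref_keys={1, 2})
print(len(errs))  # 3
```

Wired into an Airflow DAG, a non-empty error list is exactly the signal that fails the quality gate before bad rows reach downstream reporting.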

Posted 2 weeks ago

Apply