Home
Jobs

275 Drift Jobs - Page 5

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🚨 We are Hiring 🚨 https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Experience: 10 - 12 years
Job Type: Full-Time

Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, particularly test automation, cyber security, full-stack development, DevOps, Salesforce, performance testing, and manual testing. GRhombus delivery centres in India are located at Hyderabad, Chennai, Bengaluru, and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and consistent intelligent behavior.

Key Responsibilities:

Test Automation Framework Design & Implementation
- Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
- Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
- Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
- Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
- Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
- Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
- Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
- Lead the implementation of end-to-end automation for:
  - Web interfaces (React, Angular, or other SPA frameworks)
  - Backend services (REST, GraphQL, WebSockets)
  - ML model integration endpoints (real-time inference APIs, batch pipelines)
- Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
- Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
- Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
- Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
- Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
- Lead and mentor a team of automation and QA engineers across multiple projects.
- Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
- Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
- Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
- Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
- Strong hands-on expertise with Playwright (TypeScript/JavaScript).
- Experience building custom automation frameworks and utilities from scratch.
- Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
- Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
- Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
- Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
- Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
- Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
- Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
- Proven experience leading QA/Automation teams (4+ engineers).
- Strong documentation, code review, and stakeholder communication skills.
- Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
- Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
- Familiarity with Test Data Management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
- Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
- Experience with GraphQL, Kafka, or event-driven architecture testing.
- QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical discipline.
- Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
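The golden-dataset strategy this posting describes for non-deterministic model outputs can be sketched as a tolerance check rather than an exact-match assertion. The prompt, golden answer, and 0.6 threshold below are illustrative assumptions, not taken from the posting:

```python
from difflib import SequenceMatcher

# Hypothetical golden dataset: prompt -> reference answer (illustrative only).
GOLDEN = {
    "reset password": "Open Settings, choose Security, then select Reset Password.",
}

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def assert_close_to_golden(prompt: str, model_output: str, tolerance: float = 0.6) -> bool:
    """Pass when the model output is 'close enough' to the golden answer,
    instead of demanding an exact match from a non-deterministic model."""
    return similarity(model_output, GOLDEN[prompt]) >= tolerance

# A paraphrased answer should still pass under the tolerance.
print(assert_close_to_golden(
    "reset password",
    "Open settings, go to security and select reset password."))
```

In a real suite the same idea applies per test case, with the tolerance tuned on known-good and known-bad outputs; embedding-based similarity is a common drop-in replacement for the string ratio used here.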

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote


About the Role
We are hiring an AI/ML Engineer based in India. You will help design, develop, optimize, and deploy multimodal AI models for eye disease screening using image and tabular/textual data. You will collaborate closely with AI researchers, engineers, and product teams to build and translate cutting-edge models. This is an opportunity to work at the frontier of clinical AI with purpose-driven colleagues and powerful social impact.

Job Type: Full-Time, Remote
Start Date: Immediate
Compensation: 35-50 lakh INR, commensurate with experience

Responsibilities
- Design and implement multimodal deep learning models combining image encoders and language models.
- Train, fine-tune, and optimize models using annotated eye images and structured clinical data.
- Implement instruction-tuned outputs for diagnosis, referral decisions, and patient counseling in English, Hindi, and Tamil.
- Optimize inference performance using quantization, pruning, and model distillation for deployment on smartphones or edge devices.
- Work with mobile and backend engineers to integrate models with our telemedicine app and cloud-based infrastructure.
- Contribute to model evaluation across clinical sites using real-world patient data to measure accuracy, bias, and latency.
- Support development of responsible AI pipelines: privacy, bias mitigation, versioning, and drift detection.

Qualifications
Must-Have:
- Bachelor’s or Master’s degree in Computer Science, AI, Biomedical Engineering, or a related field.
- 4+ years of experience with deep learning frameworks (a Master’s degree can substitute for experience).
- Hands-on experience training or fine-tuning transformer models (e.g., LLaMA, T5, GPT).
- Hands-on experience working with vision models (CNNs, ViTs).
- Fluency with version control (Git), collaborative workflows, and cloud-based development.
- Passion for using AI in global health or social impact domains.

Preferred:
- Experience with multimodal fusion techniques (cross-attention, late fusion, MLP).
- Experience implementing or optimizing Retrieval-Augmented Generation (RAG) pipelines for domain-specific applications (e.g., medical QA, knowledge-grounded generation).
- Experience working with multilingual NLP/NLG (especially Hindi and Tamil).
- Prior work with model optimization for edge deployment using ONNX, Core ML, TensorFlow Lite, or quantized PyTorch models, or knowledge of hybrid cloud/on-device design.
- Proven ability to mentor junior engineers and foster a culture of technical growth and collaboration.
- Strong written and verbal communication skills with cross-functional stakeholders (e.g., product, clinical, design).

Why Join Us
- Work on a mission-driven project with real clinical impact across underserved communities in India.
- Flexibility to work remotely while contributing to a globally recognized AI/health project. (Note: Must be available for 2–3 regularly scheduled Zoom meetings per week and one in-person week per year.)
- Join a startup-minded team with stable long-term partnerships and funding.

Apply here: https://www.visilant.org/careers/aiml-engineer-india-job-post
Note: Applications through LinkedIn will not be reviewed.

About Visilant: Visilant is a digital health social enterprise spun out of Johns Hopkins University. Visilant builds smartphone-based imaging, telemedicine, and artificial intelligence to empower non-eye care specialists to screen patients for leading causes of blindness. Visilant has already screened over 30,000 patients in partnership with the largest eye care systems in India.
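As a rough illustration of the "late fusion" technique listed under preferred experience, per-modality scores (say, from an image model and a tabular model) are combined only at the decision level. The scores and weights below are made-up placeholder values, not anything from the posting:

```python
def late_fusion(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-modality probabilities with a weighted average.
    Each modality (image, tabular, text) is scored independently,
    then fused at the decision level -- hence 'late' fusion."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Placeholder outputs of separate image and tabular models.
scores = {"image": 0.82, "tabular": 0.64}
weights = {"image": 0.7, "tabular": 0.3}  # assumed, e.g. tuned on validation data
print(round(late_fusion(scores, weights), 3))  # 0.7*0.82 + 0.3*0.64 = 0.766
```

Cross-attention fusion, by contrast, mixes the modalities inside the network rather than at the score level; this weighted average is only the simplest member of the family.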

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Job Title: MLOps Engineer
Location: [Insert Location – e.g., Gurugram / Remote / On-site]
Experience: 2–5 years
Type: Full-Time

Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for seamless deployment and monitoring of ML models.
- Implement and manage CI/CD workflows using modern tools (e.g., GitHub Actions, Azure DevOps, Jenkins).
- Orchestrate ML services using Kubernetes for scalable and reliable deployments.
- Develop and maintain FastAPI-based microservices to serve machine learning models via RESTful APIs.
- Collaborate with data scientists and ML engineers to productionize models in Azure and AWS cloud environments.
- Automate infrastructure provisioning and configuration using Infrastructure-as-Code (IaC) tools.
- Ensure observability, logging, monitoring, and model drift detection in deployed solutions.

Required Skills:
- Strong proficiency in Kubernetes for container orchestration.
- Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or Azure DevOps.
- Hands-on experience with FastAPI for developing ML-serving APIs.
- Proficiency in deploying ML workflows on Azure and AWS.
- Knowledge of containerization (Docker optional, if used during local development).
- Familiarity with model versioning, reproducibility, and experiment tracking tools (e.g., MLflow, DVC).
- Strong scripting skills (Python, Bash).

Preferred Qualifications:
- B.Tech/M.Tech in Computer Science, Data Engineering, or related fields.
- Experience with Terraform, Helm, or other IaC tools.
- Understanding of DevOps practices and security in ML workflows.
- Good communication skills and a collaborative mindset.
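The "model drift detection" responsibility above is often implemented by comparing a feature's distribution between a reference window and live traffic. Below is a minimal sketch using the population stability index (PSI); the bin edges, sample data, and the 0.2 alert threshold are conventional assumptions, not requirements from the posting:

```python
import math

def psi(expected: list[float], actual: list[float], edges: list[float]) -> float:
    """Population Stability Index between two samples over fixed bin edges.
    Common rule of thumb (assumed here): PSI > 0.2 signals meaningful drift."""
    def frac(sample, lo, hi):
        n = sum(1 for x in sample if lo <= x < hi)
        return max(n / len(sample), 1e-6)  # floor to avoid log(0)
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

reference = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]       # training-time feature values (made up)
live = [0.6, 0.7, 0.8, 0.9, 0.95, 0.99]          # shifted production values (made up)
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
print(psi(reference, live, edges) > 0.2)  # drift flagged
```

In production the same calculation would run on a schedule against a feature store or prediction log, with the alert wired into the observability stack the posting mentions.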

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of implementing AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with handling dynamic edge case testing using AI-based test case generation
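The ETL verification and data reconciliation duties above often reduce to comparing row counts and keys between source and target tables. Here is a minimal sketch against an in-memory SQLite database; the table names and rows are invented for illustration:

```python
import sqlite3

def reconcile(conn, source: str, target: str, key: str) -> dict:
    """Compare row counts and detect source keys missing from the target."""
    cur = conn.cursor()
    counts = {
        t: cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
        for t in (source, target)
    }
    missing = cur.execute(
        f"SELECT COUNT(*) FROM {source} s "
        f"LEFT JOIN {target} t ON s.{key} = t.{key} WHERE t.{key} IS NULL"
    ).fetchone()[0]
    return {"source_rows": counts[source],
            "target_rows": counts[target],
            "missing_in_target": missing}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE prod    (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO staging VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO prod    VALUES (1, 10.0), (2, 20.0);  -- row 3 failed to load
""")
print(reconcile(conn, "staging", "prod", "id"))
# {'source_rows': 3, 'target_rows': 2, 'missing_in_target': 1}
```

Wrapped in pytest assertions and pointed at SQL Server instead of SQLite, the same query shape becomes a CI gate for each ETL run.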

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of implementing AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with handling dynamic edge case testing using AI-based test case generation

Posted 1 week ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Staff AI Engineer - MLOps
Company: Rapid7
Team: AI Center of Excellence

Team Overview:
- Cross-functional team of Data Scientists and AI Engineers
- Mission: Leverage AI/ML to protect customer attack surfaces
- Partners with Detection and Response teams, including MDR
- Encourages creativity, collaboration, and research publication
- Uses 20+ years of threat analysis and a growing patent portfolio

Tech Stack:
- Cloud/Infra: AWS (SageMaker, Bedrock), EKS, Terraform
- Languages/Tools: Python, Jupyter, NumPy, Pandas, Scikit-learn
- ML Focus: Anomaly detection, unlabeled data

Role Summary:
- Build and deploy ML production systems
- Manage end-to-end data pipelines and ensure data quality
- Implement ML guardrails and robust monitoring
- Deploy web apps and REST APIs with strong data security
- Share knowledge, mentor engineers, collaborate cross-functionally
- Embrace agile, iterative development

Requirements:
- 8–12 years in Software Engineering (3+ in ML deployment on AWS)
- Strong in Python, Flask/FastAPI, API development
- Skilled in CI/CD, Docker, Kubernetes, MLOps, cloud AI tools
- Experience in data pre-processing, feature engineering, model monitoring
- Strong communication and documentation skills
- Collaborative mindset, growth-oriented problem-solving

Preferred Qualifications:
- Experience with Java
- Background in the security industry
- Familiarity with AI/ML model operations, LLM experimentation
- Knowledge of model risk management (drift monitoring, hyperparameter tuning, registries)

About Rapid7:
Rapid7 is committed to securing the digital world through passion, collaboration, and innovation. With over 10,000 customers globally, it offers a dynamic, growth-focused workplace and tackles major cybersecurity challenges with diverse teams and a mission-driven approach.
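The "anomaly detection, unlabeled data" focus in the tech stack has a classic unsupervised baseline: flag points far from the mean in standard-deviation units. The traffic numbers below are invented, and the 3-sigma threshold is a common convention rather than anything Rapid7 specifies:

```python
from statistics import mean, stdev

def zscore_anomalies(values: list[float], threshold: float = 3.0) -> list[float]:
    """Flag points more than `threshold` standard deviations from the mean.
    A simple unsupervised baseline: no labels are needed."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Made-up login counts per hour, with one injected spike.
traffic = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 140]
print(zscore_anomalies(traffic))  # [140]
```

Production systems typically replace this with robust statistics (median/MAD) or isolation forests, since a single extreme point inflates the mean and standard deviation it is judged against.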

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


About Mindtickle’s AI/ML Engineering Team
Mindtickle is a revenue productivity solution that helps revenue teams enhance their performance by identifying areas for improvement for each team member, recommending appropriate remedial actions, and providing opportunities to implement those recommendations. The charter of the CoE-ML team is to enhance Mindtickle’s solution offerings, such as embedding artificial intelligence in the form of CoPilots, developing hyper-realistic AI-powered role plays, and enabling the automatic curation of collateral, while also improving Mindtickle’s internal operations. This includes optimizing workflows, accelerating business processes, and making information more easily discoverable. We work on cutting-edge technologies to drive innovation and deliver advanced AI solutions. We maintain high-quality evaluation standards and continuous improvement practices to ensure our AI features meet stringent performance and reliability criteria.

Role Overview
As an SDE-3 in AI/ML, you will:
- Translate business asks and requirements into technical requirements, solutions, architectures, and implementations.
- Define clear problem statements and technical requirements by aligning business goals with AI research objectives.
- Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets.
- Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows.
- Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructures.
- Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met.
- Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team.
- Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation.
- Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features.

Key Responsibilities

Problem Definition & Requirements
- Translate business use cases into detailed AI/ML problem statements and success metrics.
- Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle.

Architecture & Prototyping
- Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations.
- Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors.

Development & Productionization
- Write clean, maintainable code in Python, Java, or Go, following software engineering best practices.
- Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments.
- Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes).

Post-Deployment Monitoring
- Define and implement monitoring dashboards and alerting for model drift, latency, and throughput.
- Conduct regular performance tuning and cost analysis to maintain operational efficiency.

Mentorship & Collaboration
- Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support.
- Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics.
- Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals.

Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems.
- Proven track record of architecting and deploying production AI applications at scale.
- Strong programming skills in Python and one or more of Java, Go, or C++.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments.
- Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering.
- Expertise in CI/CD, automated testing frameworks, and MLOps best practices.
- Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences.

Preferred Experience
- Prior experience building agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar).
- Familiarity with open-source LLMs (e.g., Hugging Face hosted) and custom fine-tuning.
- Familiarity with ASR (speech-to-text), TTS (text-to-speech), and other multimodal systems.
- Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana).
- Publications or patents in AI/ML or related conference presentations.
- Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML).
- Proven experience designing, implementing, and rigorously testing AI-driven voice agents, integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, and ensuring high performance and reliability.

What We Offer
- Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment.
- Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth.
- Competitive compensation, comprehensive benefits, and equity options.
- Flexible work arrangements and support for professional development.
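The "monitoring dashboards and alerting for model drift, latency, and throughput" responsibility often boils down to percentile checks against an SLA. The sketch below uses invented latency samples and an assumed 500 ms p95 budget:

```python
from statistics import quantiles

def p95_breach(latencies_ms: list[float], budget_ms: float = 500.0) -> bool:
    """Alert when the 95th-percentile latency exceeds the SLA budget."""
    p95 = quantiles(latencies_ms, n=20)[-1]  # last of 19 cut points = p95
    return p95 > budget_ms

healthy = [120, 130, 110, 140, 150, 125, 135, 145, 115, 160] * 2
degraded = healthy + [900, 950, 1000]
print(p95_breach(healthy), p95_breach(degraded))
```

In practice the percentile would be computed by the metrics backend (Datadog, Prometheus) over a sliding window, with this threshold expressed as an alert rule rather than inline code.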

Posted 1 week ago

Apply

5.0 - 6.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5-6 years

Key Responsibilities
- Process, analyze, and interpret time-series data from MEMS sensors (e.g., accelerometers, gyroscopes, pressure sensors).
- Develop and apply statistical methods to identify trends, anomalies, and key performance metrics.
- Compute and optimize KPIs related to sensor performance, reliability, and drift analysis.
- Utilize MATLAB toolboxes (e.g., Data Cleaner, Ground Truth Labeler) or Python libraries for data validation, annotation, and anomaly detection.
- Clean, preprocess, and visualize large datasets to uncover actionable insights.
- Collaborate with hardware engineers, software developers, and product owners to support end-to-end data workflows.
- Convert and format data into standardized schemas for use in data pipelines and simulations.
- Generate automated reports and build dashboards using Power BI or Tableau.
- Document methodologies, processes, and findings in clear and concise technical reports.

Required Qualifications
- Proficiency in Python or MATLAB for data analysis, visualization, and reporting.
- Strong foundation in time-series analysis, signal processing, and statistical modeling (e.g., autocorrelation, moving averages, seasonal decomposition).
- Experience working with MEMS sensors and sensor data acquisition systems.
- Hands-on experience with pandas, NumPy, SciPy, scikit-learn, and matplotlib.
- Ability to develop automated KPI reports and interactive dashboards (Power BI or Tableau).

Preferred Qualifications
- Prior experience with data from smartphones, hearables, or wearable devices.
- Advanced knowledge of MEMS sensor data wrangling techniques.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Exposure to real-time data streaming and processing frameworks/toolboxes.

Posted 1 week ago

Apply

5.0 years

8 - 10 Lacs

Thiruvananthapuram

Remote

5 - 7 Years | 1 Opening | Kochi, Trivandrum

Role description

Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview:
We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques, such as Retrieval-Augmented Generation (RAG) and LLM-based applications, into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities:

Model Development & Deployment
- Design, implement, and manage scalable ML pipelines using CI/CD practices.
- Operationalize ML and GenAI models, ensuring high availability, observability, and reliability.
- Automate data and model validation, versioning, and monitoring processes.

Technical Leadership & Mentorship
- Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices.
- Define architecture standards and promote engineering excellence across ML-Ops workflows.

Innovation & Generative AI Strategy
- Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications.
- Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.

Governance & Compliance
- Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability.
- Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills:
- 5+ years of experience in ML-Ops, Data Engineering, or Machine Learning.
- Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
- Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
- Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
- Strong understanding of responsible AI and MLOps governance best practices.
- Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills:
- Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
- Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
- Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
- Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
- Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
- Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
- Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
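A RAG pipeline of the kind described above starts with retrieving the documents most relevant to a query. The toy retriever below uses bag-of-words cosine similarity over an invented document set, standing in for the vector databases (FAISS, Pinecone, Weaviate) the posting names:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query; in a full RAG
    pipeline the retrieved text is injected into the LLM prompt as context."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "kubernetes pods restart automatically on failure",
    "mlflow tracks experiments models and metrics",
    "terraform provisions cloud infrastructure as code",
]
print(retrieve("how does mlflow track models", docs))
```

Swapping the word-count vectors for embedding vectors from a sentence encoder, and the sorted list for an approximate-nearest-neighbor index, turns this toy into the production pattern.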

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Overview
We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
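The drift-detection responsibility above is described in terms of Azure ML Pipelines and MLflow; the statistical idea underneath is framework-free. A common sketch is the population stability index (PSI) over a bucketed feature, shown here with made-up baseline and current distributions and the conventional 0.2 alert threshold (both the buckets and the threshold are illustrative assumptions, not part of the listing).

```python
import math
from collections import Counter

def psi(expected, actual, eps=1e-6):
    # Population Stability Index over bucketed values:
    # sum over buckets of (p_actual - p_expected) * ln(p_actual / p_expected).
    cats = set(expected) | set(actual)
    e_counts, a_counts = Counter(expected), Counter(actual)
    score = 0.0
    for c in cats:
        p_e = e_counts[c] / len(expected) + eps  # eps avoids log(0)
        p_a = a_counts[c] / len(actual) + eps
        score += (p_a - p_e) * math.log(p_a / p_e)
    return score

# Made-up bucketed feature values: training baseline vs. this week's traffic.
baseline = ["low"] * 70 + ["mid"] * 20 + ["high"] * 10
current = ["low"] * 40 + ["mid"] * 30 + ["high"] * 30

# A common rule of thumb treats PSI > 0.2 as significant drift.
score = psi(baseline, current)
print(f"PSI = {score:.3f}, drift = {score > 0.2}")
```

In an Azure ML or MLflow pipeline, a score like this would be logged per feature on a schedule and wired to an alert when it crosses the threshold.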

Posted 1 week ago

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Hands-on experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with handling dynamic edge-case testing using AI-based test case generation
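The "data reconciliation" and "staging vs. production" checks this role describes can be sketched in a few lines. The example below uses the stdlib sqlite3 module as a stand-in for SQL Server (which the listing actually targets), and the `orders` table, its columns, and the sample rows are invented for illustration; a pytest suite would wrap `reconcile` in assertions.

```python
import sqlite3

def load(rows):
    # Build an in-memory database standing in for one environment.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return con

def reconcile(staging, production):
    # Compare row counts and summed amounts between environments; real
    # suites would also diff key sets and per-partition checksums.
    report = {}
    for name, con in (("staging", staging), ("production", production)):
        count, total = con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
        report[name] = (count, round(total, 2))
    report["match"] = report["staging"] == report["production"]
    return report

staging = load([(1, 10.0), (2, 20.5)])
production = load([(1, 10.0), (2, 20.5), (3, 99.0)])  # extra row: a mismatch
print(reconcile(staging, production))
```

Dropped into CI, a failing `match` flag would block promotion of the data pipeline change that caused the divergence.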

Posted 1 week ago

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

About The Role
We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda’s technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda’s global mission of delivering transformative therapies to patients worldwide.

How You Will Contribute
- Architect scalable and secure machine learning systems that integrate with Takeda’s enterprise platforms, including R&D, manufacturing, and clinical trial operations.
- Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks.
- Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
- Collaborate with Takeda’s Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
- Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
- Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda’s ethical AI standards in system deployment.
- Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
- Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
- Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
- Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.

Skills And Qualifications

Education
- Bachelor’s, Master’s, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.

Experience
- At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
- Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.

Technical Skills
- Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
- Strong programming skills in Python and experience with machine learning libraries such as scikit-learn, XGBoost, LightGBM, and TensorFlow.
- Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
- Familiarity with Databricks tools and services for scalable data workflows and model management.
- Expertise in building robust observability and monitoring systems to track ML systems in production.
- Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
- Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).

Soft Skills
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
- Leadership ability to mentor teams and drive architecture-standardization initiatives.
- Ability to manage projects independently and advocate for AI/ML adoption across Takeda.

Preferred Qualifications
- Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
- Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
- Publications or contributions to AI research or MLOps tooling communities.

WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive salary + performance annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Health & wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 5 days of leave every year for voluntary service, in addition to humanitarian leaves
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- No Meeting Days
- Reimbursements: home internet & mobile phone
- Employee Referral Program
- Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
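One responsibility in this listing is tracking model performance (accuracy, latency, drift) in production with tools such as Prometheus or Grafana. The threshold logic such a monitor exports can be sketched without those tools; the window size, accuracy floor, and latency ceiling below are invented illustrative defaults, and a real deployment would publish `status()` as Prometheus gauges and alert from Grafana.

```python
from collections import deque

class ModelMonitor:
    # Rolling-window health check for a deployed model.
    def __init__(self, window=100, min_accuracy=0.9, max_p95_latency_ms=250.0):
        self.outcomes = deque(maxlen=window)   # 1 = correct prediction, 0 = wrong
        self.latencies = deque(maxlen=window)  # per-request latency in ms
        self.min_accuracy = min_accuracy
        self.max_p95_latency_ms = max_p95_latency_ms

    def record(self, correct, latency_ms):
        self.outcomes.append(1 if correct else 0)
        self.latencies.append(latency_ms)

    def status(self):
        accuracy = sum(self.outcomes) / len(self.outcomes)
        # Nearest-rank p95 over the window (simple, not interpolated).
        p95 = sorted(self.latencies)[int(0.95 * (len(self.latencies) - 1))]
        return {
            "accuracy": accuracy,
            "p95_latency_ms": p95,
            "healthy": accuracy >= self.min_accuracy and p95 <= self.max_p95_latency_ms,
        }

mon = ModelMonitor(window=10)
for i in range(10):
    mon.record(correct=(i != 0), latency_ms=100.0 + i)  # 9/10 correct, ~100 ms
print(mon.status())
```

Drift checks would sit alongside this, comparing the distribution of incoming features against the training baseline before scores degrade.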

Posted 1 week ago

5.0 years

0 Lacs

Greater Nashik Area

On-site

Job Title: AI - LLMOps Engineer

Job Description
We're Concentrix, the intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent.

The Concentrix Catalyst team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world, providing market-leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients. Hence achieving our vision.

Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures.

Position Overview
We are seeking a skilled LLMOps Engineer with expertise in operationalizing Generative AI solutions to join our AI Engineering Center of Excellence. This role will focus on establishing robust infrastructure, deployment pipelines, and monitoring systems to ensure the reliable, secure, and scalable delivery of LLM-based applications in production environments. The LLMOps Engineer will work closely with AI Tech Leads and Senior Engineers to bridge the gap between development and production deployment of GenAI solutions.

Primary Responsibilities
- Design and implement infrastructure and deployment pipelines for large language model (LLM) applications in production environments
- Establish monitoring, observability, and logging systems for GenAI applications to ensure performance, reliability, and data quality
- Develop automated testing frameworks specific to LLM applications, including evaluation of model outputs and prompt effectiveness
- Implement version control systems for models, prompts, and configurations to ensure reproducibility and traceability
- Create and maintain CI/CD pipelines for seamless deployment of GenAI solutions
- Optimize infrastructure and implementations for cost efficiency, considering compute resources and API usage
- Implement security controls and compliance measures specific to GenAI applications
- Collaborate with development teams to establish best practices for transitioning GenAI solutions from prototype to production
- Automate feedback loops for continuous improvement of deployed models
- Document operational procedures, architecture decisions, and maintenance protocols

Required Qualifications
- 5+ years of experience in DevOps, platform engineering, or related roles, with at least 2+ years focused on ML/AI systems
- Hands-on experience with cloud infrastructure and services for AI workloads (AWS, Azure, GCP)
- Strong programming skills in languages commonly used for infrastructure and automation (Bash, YAML)
- Experience with containerization and orchestration technologies (Docker, Kubernetes) for AI workloads
- Knowledge of LLM deployment patterns and associated infrastructure requirements
- Familiarity with monitoring tools and techniques for AI systems (e.g., model performance, drift detection, cost tracking)
- Understanding of CI/CD principles and experience implementing automated pipelines
- Experience with infrastructure-as-code tools (Terraform, CloudFormation, etc.)
- Basic understanding of LLM architectures and their operational requirements
- Bachelor's degree in Computer Science, Engineering, or a related technical field

Preferred Skills
- Experience deploying and managing production LLM applications at scale
- Knowledge of vector database operations and optimization for RAG implementations
- Familiarity with API gateway management and rate-limiting strategies
- Experience with distributed tracing and debugging complex AI systems
- Understanding of data privacy, security, and compliance considerations for GenAI applications
- Knowledge of cost optimization techniques for LLM inference and embedding generation
- Experience with feature flagging and A/B testing frameworks for AI applications
- Familiarity with LLM evaluation metrics and automated testing approaches
- Experience with GPU resource management and optimization

Success Factors
- Strong technical curiosity and willingness to explore new GenAI capabilities
- Balance between operational excellence and enabling rapid innovation
- Strong problem-solving skills for troubleshooting complex production issues
- Effective communication across technical and non-technical stakeholders
- Proactive approach to identifying and mitigating operational risks
- Ability to translate business requirements into operational specifications
- Commitment to continuous improvement of operational processes
- Adaptability to rapidly evolving GenAI technologies and deployment patterns

Location: IND Work-at-Home
Language Requirements:
Time Type: Full time

If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents. R1598486
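Among the responsibilities above is version control for models, prompts, and configurations so that production behavior is reproducible and traceable. One way that is commonly done is content-addressed versioning; the sketch below (the class name, the `summarize` prompt, and the temperature values are all invented for illustration) derives a stable version id from the prompt text plus its parameters.

```python
import hashlib
import json

class PromptRegistry:
    # Content-addressed store: every prompt/config revision gets a stable
    # hash id, so an incident can be traced to the exact text and settings used.
    def __init__(self):
        self._store = {}

    def register(self, name, template, params):
        # Canonical JSON (sorted keys) makes the hash deterministic.
        payload = json.dumps({"template": template, "params": params}, sort_keys=True)
        version = hashlib.sha256(payload.encode()).hexdigest()[:12]
        self._store[(name, version)] = (template, params)
        return version

    def get(self, name, version):
        return self._store[(name, version)]

reg = PromptRegistry()
v1 = reg.register("summarize", "Summarize: {text}", {"temperature": 0.2})
v2 = reg.register("summarize", "Summarize: {text}", {"temperature": 0.7})
print(v1 != v2, reg.get("summarize", v1)[1])  # distinct versions per config
```

Because identical content always hashes to the same id, re-registering an unchanged prompt is a no-op, while any edit to the template or parameters produces a new, auditable version.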

Posted 1 week ago

Apply

5.0 years

3 - 5 Lacs

Hyderābād

On-site

JLL supports the Whole You, personally and professionally. Our people at JLL are shaping the future of real estate for a better world by combining world class services, advisory and technology to our clients. We are committed to hiring the best, most talented people in our industry; and we support them through professional growth, flexibility, and personalized benefits to manage life in and outside of work. Whether you’ve got deep experience in commercial real estate, skilled trades, and technology, or you’re looking to apply your relevant experience to a new industry, we empower you to shape a brighter way forward so you can thrive professionally and personally. The BMS Engineer is responsible for implementing and maintaining Building Management Systems that control and monitor various building functions such as HVAC, lighting, security, and energy management. This role requires a blend of technical expertise, problem-solving skills, and the ability to work with diverse stakeholders. Required Qualifications and skills: Diploma/Bachelor's degree in Electrical / Mechanical Engineering or related field 5+ years of experience in BMS Operations, Design implementation, and maintenance Proficiency in BMS software platforms (e.g. Schneider Electric, Siemens, Johnson Controls) Strong understanding of HVAC systems and building operations Knowledge of networking protocols (e.g. BACnet, Modbus, LonWorks) Familiarity with energy management principles and sustainability practices Excellent problem-solving and analytical skills Strong communication and interpersonal abilities Ability to work independently and as part of a team Preferred Qualifications: Professional engineering license (P.E.) or relevant industry certifications Experience with integration of IoT devices and cloud-based systems Knowledge of building codes and energy efficiency standards Project management experience Programming skills (e.g., Python, C++, Java) Roles and Responsibilities of BMS Engineer 1. 
Troubleshoot and resolve issues with BMS 2. Optimize building performance and energy efficiency through BMS tuning 3. Check LL BMS critical parameters & communicate with LL in case parameters go beyond operating threshold 4. Develop and maintain system documentation and operational procedures. Monitor BMS OEM PPM schedule & ensure diligent execution. Monitor SLAs & inform WTSMs in the event of breach. 5. Ensure real time monitoring of Hot / Cold Prism Tickets & resolve on priority. 6. Preparation of Daily / Weekly & Monthly reports comprising of Uptime / Consumption with break up / Temperature trends / Alarms & equipment MTBF 7. Ensure adherence to Incident escalation process & training to Ground staff. 8. Coordination with BMS OEM for ongoing operational issues (Graphics modification/ sensor calibration / controller configuration / Hardware replacement) 9. Supporting annual power down by gracefully shutting down the system & bringing up post completion of the activity. 10. Ensure healthiness of FLS (Panels / Smoke Detectors) & conduct periodic check for drift levels. 11. Provide technical support and training to facility management team 12. Collaborate with other engineering disciplines, WPX Team and project stakeholders and make changes to building environment if so needed. If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements below. We’re interested in getting to know you and what you bring to the table! Personalized benefits that support personal well-being and growth: JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health. About JLL – We’re JLL—a leading professional services and investment management firm specializing in real estate. 
We have operations in over 80 countries and a workforce of over 102,000 individuals around the world who help real estate owners, occupiers and investors achieve their business ambitions. As a global Fortune 500 company, we also have an inherent responsibility to drive sustainability and corporate social responsibility. That’s why we’re committed to our purpose to shape the future of real estate for a better world. We’re using the most advanced technology to create rewarding opportunities, amazing spaces and sustainable real estate solutions for our clients, our people, and our communities. Our core values of teamwork, ethics and excellence are also fundamental to everything we do and we’re honored to be recognized with awards for our success by organizations both globally and locally. Creating a diverse and inclusive culture where we all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we’re headed in the future. And we know that unique backgrounds, experiences and perspectives help us think bigger, spark innovation and succeed together.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Role: DevOps / Site Reliability Engineer Duration: 6 Months (Possible Extension) Location: Pune (Onsite) Timings: Full Time (As per company timings) IST Notice Period: Immediate Joiners Only Experience: 5+ Years About the Role We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills. Key Responsibilities Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps). Configure and maintain observability tools (logs, metrics, traces, alerts). Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java). Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools. Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets). Lead or participate in incident response and root cause analysis using logs, traces, and dashboards. Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments. Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices. Required Skills & Experience 5–10 years in DevOps, SRE, Platform, or Backend Infrastructure roles. Strong coding/scripting skills and ability to review production-grade backend code. Hands-on experience with Kubernetes in production, preferably on GKE. Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux). Deep knowledge of Cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management). Systems thinking — understands failure domains, cascading issues, timeout limits, and recovery strategies. Strong communication and documentation skills — capable of driving improvements through PRs and design reviews. 
Tech Stack & Tools Cloud & Orchestration: GKE, Kubernetes IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux Monitoring & Alerting: Datadog, PagerDuty Databases & Networking: CloudSQL, Cloudflare Security & Access Control: Secret Management, IAM
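The "reduce configuration drift" duty above usually means diffing desired state (from IaC) against live state (from the cloud API). A toy sketch of that comparison for resource tags; the dict-shaped inputs and resource names are hypothetical, not any real Terraform or GCP API:

```python
def tag_drift(desired: dict, live: dict) -> dict:
    """Report, per resource, tags that are missing/wrong and tags that are unexpected."""
    report = {}
    for resource, want in desired.items():
        have = live.get(resource, {})
        # Tags declared in IaC but absent or different in the live environment.
        wrong = {k: v for k, v in want.items() if have.get(k) != v}
        # Tags present on the live resource but never declared.
        extra = {k: v for k, v in have.items() if k not in want}
        if wrong or extra:
            report[resource] = {"missing_or_wrong": wrong, "unexpected": extra}
    return report

desired = {"vm-1": {"env": "prod", "team": "platform"}}
live = {"vm-1": {"env": "staging", "team": "platform", "owner": "alice"}}
print(tag_drift(desired, live))
# {'vm-1': {'missing_or_wrong': {'env': 'prod'}, 'unexpected': {'owner': 'alice'}}}
```

In practice the `desired` side would come from `terraform plan`/state and the `live` side from the provider's API, with the report feeding the standardization work the posting describes.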

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote


Role Description Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems Location: Any UST Job Type: Full-Time Experience Level: Senior / Lead Role Overview We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques—such as Retrieval-Augmented Generation (RAG) and LLM-based applications —into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred. Key Responsibilities Model Development & Deployment Design, implement, and manage scalable ML pipelines using CI/CD practices. Operationalize ML and GenAI models, ensuring high availability, observability, and reliability. Automate data and model validation, versioning, and monitoring processes. Technical Leadership & Mentorship Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices. Define architecture standards and promote engineering excellence across ML-Ops workflows. Innovation & Generative AI Strategy Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications. Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks. Governance & Compliance Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability. Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions. Must-Have Skills 5+ years of experience in ML-Ops, Data Engineering, or Machine Learning. 
Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure). Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow). Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools. Experience with LLM frameworks (e.g., LangChain, HuggingFace Transformers) and GenAI use cases like RAG. Strong understanding of responsible AI and MLOps governance best practices. Proven ability to work cross-functionally and lead technical discussions. Good-to-Have Skills Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai). Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks. Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration. Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks). Experience in mentoring, technical leadership, or project ownership roles. Why Join Us? Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation. Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning. Competitive compensation, remote flexibility, and opportunities for career advancement. Skills: Artificial Intelligence, Python, ML-Ops
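Several of the listings above ask for RAG experience. At its core, the retrieval step ranks documents by vector similarity to the query; a deliberately tiny sketch using bag-of-words counts in place of learned embeddings (production systems use dense embeddings and a vector database such as FAISS or Pinecone, as the posting notes):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": sparse term counts. Real RAG uses learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "terraform manages infrastructure as code",
    "kubernetes schedules containers",
    "llms generate text from prompts",
]
print(retrieve("how do llms generate text", docs, k=1))
# ['llms generate text from prompts']
```

The retrieved passages would then be injected into the LLM prompt, which is the "augmented generation" half of RAG.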

Posted 1 week ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Staff AI Engineer MLOps About The Team The AI Center of Excellence team includes Data Scientists and AI Engineers that work together to conduct research, build prototypes, design features and build production AI components and systems. Our mission is to leverage the best available technology to protect our customers' attack surfaces. We partner closely with Detection and Response teams, including our MDR service, to leverage AI/ML for enhanced customer security and threat detection. We operate with a creative, iterative approach, building on 20+ years of threat analysis and a growing patent portfolio. We foster a collaborative environment, sharing knowledge, developing internal learning, and encouraging research publication. If you’re passionate about AI and want to make a major impact in a fast-paced, innovative environment, this is your opportunity. The Technologies We Use Include AWS for hosting our research environments, data, and features (e.g., SageMaker, Bedrock) EKS to deploy applications Terraform to manage infrastructure Python for analysis and modeling, taking advantage of NumPy and pandas for data wrangling. Jupyter notebooks (locally and remotely hosted) as a computational environment scikit-learn for building machine learning models Anomaly detection methods to make sense of unlabeled data About The Role Rapid7 is seeking a Staff AI Engineer to join our team as we expand and evolve our growing AI and MLOps efforts. You should have a strong foundation in software engineering, and MLOps and DevOps systems and tools. Further, you’ll have a demonstrated track record of taking models created in the AI R&D process to production with repeatable deployment, monitoring and observability patterns. In this intersectional role, you will combine your expertise in AI/ML deployments, cloud systems and software engineering to enhance our product offerings and streamline our platform's functionalities. 
In This Role, You Will Design and build ML production systems, including project scoping, data requirements, modeling strategies, and deployment Develop and maintain data pipelines, manage the data lifecycle, and ensure data quality and consistency throughout Assure robust implementation of ML guardrails and manage all aspects of service monitoring Develop and deploy accessible endpoints, including web applications and REST APIs, while maintaining steadfast data privacy and adherence to security best practices and regulations Share expertise and knowledge consistently with internal and external stakeholders, nurturing a collaborative environment and fostering the development of junior engineers Embrace agile development practices, valuing constant iteration, improvement, and effective problem-solving in complex and ambiguous scenarios The Skills You’ll Bring Include 8-12 years experience as a Software Engineer, with at least 3 years focused on gaining expertise in ML deployment (especially in AWS) Solid technical experience in the following is required: Software engineering: developing APIs with Flask or FastAPI, paired with strong Python knowledge DevOps and MLOps: Designing and integrating scalable AI/ML systems into production environments, CI/CD tooling, Docker, Kubernetes, cloud AI resource utilization and management Pipelines, monitoring, and observability: Data pre-processing and feature engineering, model monitoring and evaluation A growth mindset - welcoming the challenge of tackling complex problems with a bias for action Strong written and verbal communication skills - able to effectively communicate technical concepts to diverse audiences and creating clear documentation of system architectures and implementation details Proven ability to collaborate effectively across engineering, data science, product, and other teams to drive successful MLOps initiatives and ensure alignment on goals and deliverables. 
Experience With The Following Would Be Advantageous Experience with Java programming Experience in the security industry AI and ML models, understanding their operational frameworks and limitations Familiarity with resources that enable data scientists to fine tune and experiment with LLMs Knowledge of or experience with model risk management strategies, including model registries, concept/covariate drift monitoring, and hyperparameter tuning We know that the best ideas and solutions come from multi-dimensional teams. That’s because these teams reflect a variety of backgrounds and professional experiences. If you are excited about this role and feel your experience can make an impact, please don’t be shy - apply today. About Rapid7 At Rapid7, we are on a mission to create a secure digital world for our customers, our industry, and our communities. We do this by embracing tenacity, passion, and collaboration to challenge what’s possible and drive extraordinary impact. Here, we’re building a dynamic workplace where everyone can have the career experience of a lifetime. We challenge ourselves to grow to our full potential. We learn from our missteps and celebrate our victories. We come to work every day to push boundaries in cybersecurity and keep our 10,000 global customers ahead of whatever’s next. Join us and bring your unique experiences and perspectives to tackle some of the world’s biggest security challenges.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Job Title: Azure DevOps Engineer Location: Pune Experience: 5-7 Years Job Description 5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience Platform Thinking: Strong understanding of platform engineering principles, developer experience, and self-service capabilities Azure Expertise: Advanced knowledge of Azure services including compute, networking, storage, and managed services Infrastructure as Code: Proficient in Terraform, ARM templates, or Azure Bicep with hands-on experience in large-scale deployments DevOps and Automation CI/CD Pipelines: Expert-level experience with Azure DevOps, GitHub Actions, or Jenkins Automation Scripting: Strong programming skills in Python, PowerShell, or Bash for automation and tooling Git Workflows: Advanced understanding of Git branching strategies, pull requests, and code review processes Cloud Architecture and Security Cloud Architecture: Deep understanding of cloud design patterns, microservices, and distributed systems Security Best Practices: Implementation of security scanning, compliance automation, and zero-trust principles Networking: Advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity Identity Management: Experience with Azure Active Directory, RBAC, and identity governance Monitoring and Observability Azure Monitor: Advanced experience with Azure Monitor, Log Analytics, and Application Insights Metrics and Alerting: Implementation of comprehensive monitoring strategies and incident response Logging Solutions: Experience with centralized logging and log analysis platforms Performance Optimization: Proactive performance monitoring and optimization techniques Roles And Responsibilities Platform Development and Management Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently Create and maintain platform abstractions that simplify complex infrastructure for development teams 
Develop internal developer platforms (IDP) with standardized templates, workflows, and guardrails Implement platform-as-a-service (PaaS) solutions using Azure native services Establish platform standards, best practices, and governance frameworks Infrastructure as Code (IaC) Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep Create reusable infrastructure modules and templates for consistent environment provisioning Implement GitOps workflows for infrastructure deployment and management Maintain infrastructure state management and drift detection mechanisms Establish infrastructure testing and validation frameworks DevOps and CI/CD Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks Create deployment strategies including blue-green, canary, and rolling deployments Establish branching strategies and release management processes Implement secrets management and secure deployment practices Platform Operations and Reliability Implement monitoring, logging, and observability solutions for platform services Establish SLAs and SLOs for platform services and developer experience metrics Create self-healing and auto-scaling capabilities for platform components Implement disaster recovery and business continuity strategies Maintain platform security posture and compliance requirements Preferred Qualifications Bachelor’s degree in computer science or a related field (or equivalent work experience)
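Of the deployment strategies listed above (blue-green, canary, rolling), a canary rollout promotes traffic in steps only while a health gate holds. A control-loop sketch with a stubbed error-rate probe; the step sizes, 1% threshold, and probe functions are illustrative, not from any real pipeline:

```python
def canary_rollout(error_rate_probe, steps=(5, 25, 50, 100), max_error_rate=0.01):
    """Advance canary traffic through the given percentage steps.

    After each step, check the probe; roll back to 0% if the gate fails.
    Returns the final traffic percentage: 100 on success, 0 on rollback.
    """
    for pct in steps:
        # In a real system this would patch ingress or service-mesh weights,
        # then wait a bake period before reading metrics.
        observed = error_rate_probe(pct)
        if observed > max_error_rate:
            print(f"rollback at {pct}% (error rate {observed:.2%})")
            return 0
        print(f"promoted to {pct}%")
    return 100

def healthy_probe(pct):
    return 0.002  # stub: errors stay low at every traffic level

def broken_probe(pct):
    return 0.002 if pct < 50 else 0.08  # stub: regression appears under load

print(canary_rollout(healthy_probe))  # 100
print(canary_rollout(broken_probe))   # 0
```

Tools like Argo Rollouts or Azure DevOps deployment gates implement exactly this loop, with the probe reading Application Insights or Prometheus metrics.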

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


About the role: We’re hiring an execution-first inbound marketer who lives in the data layer, and thinks like a growth hacker. If you get a kick out of turning raw inputs into performance pipelines using HubSpot, Salesforce, automation scripts, and creative campaigns, this role is built for you. You’ll work directly under the Inbound Marketing Director and will own end-to-end delivery across digital ads, SEO, SEM, marketing ops, campaign automation, content distribution, and event execution. Responsibilities: 1. SEO, Paid Media & Web Analytics: Execute and optimize SEO initiatives using SEMRush, Google Search Console, and Google Analytics. Manage paid campaigns (primarily LinkedIn) in coordination with agency partners: own ad creatives, copy, and weekly reporting. Monitor SEO health, own backlink sprints, manage keyword-to-content alignment. 2. Email Marketing & Campaign Execution: Segment lists and deploy nurture streams based on product-market clusters. Draft and QA emails for announcements, press releases, and en-masse campaigns. Own daily/weekly email performance dashboards in Sheets + HubSpot. 3. Events & Engagement Programs: Coordinate speaker outreach, guest targeting, and content logistics for CFO roundtables and micro-events. Support post-event workflows in HubSpot (tagging, follow-up, recycling leads into nurture). 4. Marketing Automation & CRM Ops: Manage HubSpot as the source of truth for marketing automation (forms, workflows, nurture streams, contact properties). Support Salesforce campaign and lead tracking workflows in sync with sales/BDR efforts. Build automations via Google Apps Script to bridge tools, clean data, and trigger workflows across Sheets, HubSpot, and SFDC. 5. Presentation & Creative Aesthetics: Build internal and external-facing slides for events, reviews, and campaign pitches. Maintain brand consistency and high visual polish across decks and outbound collateral. 
Requirements: 2–5 years of experience in inbound or performance marketing for B2B/SaaS companies. Experienced in tools: SEMRush, Google Analytics, Google Search Console, LinkedIn Ads, chatbots (Qualified or Drift). Hands-on with HubSpot (automation, forms, emails) and Salesforce (leads, campaigns, reporting). Strong skills in Google Sheets, Excel (formulas, pivot tables, macros). Aesthetic sense in creating slide decks using Google Slides or PowerPoint. Obsessed with clean data, dashboards, and campaign ROI. Comfortable wearing multiple hats (from ops to creative). Familiarity with chatbot flows and conversational marketing logic. Previous collaboration with SDRs/BDRs to generate MQLs and SQLs. Benefits: Well-funded and proven startup with large ambitions and competitive salaries. Entrepreneurial culture where pushing limits, creating and collaborating is everyday business. Open communication with management and company leadership. Small, dynamic teams = massive impact. Simetrik considers qualified applicants for employment without regard to race, gender, age, color, religion, national origin, marital status, disability, sexual orientation, gender identity/expression, protected military/veteran status, or any other legally protected factor. I authorize Simetrik to be the data controller and, as such, it may collect, store and use for the purposes of my possible hiring, under the conditions described in this document. I also give my consent to Simetrik to treat my personal data information in accordance with the Personal Data Treatment Policy available at https://simetrik.com/, which was made known to me before collecting my personal data. Join a team of incredibly talented people that build things, are free to create, and love collaborating!

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Location: In-Person (sftwtrs.ai Lab) Experience Level: Early Career / 1–3 years About sftwtrs.ai sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains. Role Overview As a Research Engineer I , you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you’ll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills. Key Responsibilities Research & Prototyping Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts). Rapidly prototype novel model architectures, training schemes, and evaluation pipelines. Design experiments, run benchmarks, and analyze results to validate research hypotheses. Software Development & Integration Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes). Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.). Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving. Collaboration & Documentation Work closely with Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements. 
Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications. Participate in regular code reviews, design discussions, and sprint planning sessions. Model Deployment & Monitoring Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or custom MLOps stack). Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics. Troubleshoot deployment issues, optimize inference pipelines for latency and throughput. Continuous Learning & Contribution Stay current with AI/ML trends—present findings to the team and propose opportunities for new research directions. Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit). Mentor interns or junior engineers on machine learning best practices and coding standards. Qualifications Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a closely related field. Research Experience: 1–3 years of hands-on experience in AI/ML research or equivalent internships. Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training). Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus. Development Skills: Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch). Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant). Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs. Software Engineering Practices: Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker). 
Comfortable writing unit tests (pytest or unittest) and conducting code reviews. Understanding of cloud services (AWS, GCP, or Azure) for training and serving models. Analytical & Collaborative Skills: Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines. Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences. Demonstrated ability to collaborate effectively in a small, agile team. Preferred Skills (Not Mandatory) Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended). Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings). Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices. Exposure to Rust or Go for high-performance inference code. Contributions to open-source AI or security automation projects. Why Join Us? Cutting-Edge Research & Production Impact: Work on adversarial ML and security–automation projects that go from concept to real-world deployment. Hands-On Mentorship: Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering. Innovative Environment: Join a lean, highly specialized team where your contributions are immediately visible and valued. Professional Growth: Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development. Competitive Compensation & Benefits: Attractive salary, health insurance, and opportunities for performance-based bonuses. How to Apply Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line “RE: Research Engineer I Application.” sftwtrs.ai is an equal-opportunity employer. 
We celebrate diversity and are committed to creating an inclusive environment for all employees.
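The deployment duties in the listing above ("optimize inference pipelines for latency and throughput") usually reduce to tracking tail latency against an SLO. A small sketch computing p95 with the nearest-rank method; the 300 ms SLO and the sample latencies are invented for illustration:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def slo_breached(latencies_ms, p=95, slo_ms=300):
    """True when the p-th percentile latency exceeds the SLO target."""
    return percentile(latencies_ms, p) > slo_ms

latencies = [120, 180, 150, 240, 900, 130, 160, 170, 140, 110]  # ms, one slow outlier
print(percentile(latencies, 95))  # 900
print(slo_breached(latencies))    # True: p95 is over the 300 ms target
```

Monitoring stacks (Prometheus, MLflow dashboards, etc.) compute this continuously over sliding windows; the point of the sketch is only the shape of the check that feeds alerting.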

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


This role is for one of Weekday's clients Min Experience: 8 years Location: Bengaluru JobType: full-time Requirements As an SDE-3 in AI/ML, you will: Translate business asks and requirements into technical requirements, solutions, architectures, and implementations Define clear problem statements and technical requirements by aligning business goals with AI research objectives Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructures Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation Collaborate closely with product management, QA, data engineering, DevOps, and customer facing teams to deliver cohesive AI-powered product features Key Responsibilities Problem Definition & Requirements Translate business use cases into detailed AI/ML problem statements and success metrics Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle Architecture & Prototyping Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors Development & Productionization Write clean, maintainable code in Python, Java, or Go, following software engineering best practices Implement 
automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes) Post-Deployment Monitoring Define and implement monitoring dashboards and alerting for model drift, latency, and throughput Conduct regular performance tuning and cost analysis to maintain operational efficiency Mentorship & Collaboration Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals Required Qualifications Bachelor's or Master's degree in Computer Science, Engineering, or a related field 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems Proven track record of architecting and deploying production AI applications at scale Strong programming skills in Python and one or more of Java, Go, or C++ Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering Expertise in CI/CD, automated testing frameworks, and MLOps best practices Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences Preferred Experience Prior experience building Agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI or similar) Familiarity with open-source LLMs (e.g., Hugging Face hosted) and custom fine-tuning Familiarity with ASR (Speech to Text) and TTS (Text to Speech), and other multi-modal systems Experience with monitoring and observability tools (e.g. 
Datadog, Prometheus, Grafana) Publications or patents in AI/ML or related conference presentations Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML) Proven experience designing, implementing, and rigorously testing AI-driven voice agents - integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot - and ensuring high performance and reliability What we offer? Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth Competitive compensation, comprehensive benefits, and equity options Flexible work arrangements and support for professional development Show more Show less
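The post-deployment monitoring duties above center on alerting for model drift. As an illustration of what such a check computes, here is a minimal sketch of one widely used statistic, the Population Stability Index (PSI) over binned feature distributions; the binning scheme and thresholds are generic conventions, not tied to any particular employer's stack:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Both inputs are lists of numeric feature values. A common rule of thumb:
    PSI < 0.1 -> stable, 0.1-0.25 -> moderate drift, > 0.25 -> major drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def dist(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            idx = max(idx, 0)  # clamp live values below the baseline range
            counts[idx] += 1
        total = len(values)
        # small epsilon avoids log(0) for empty bins
        return [max(c / total, 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i % 100) for i in range(1000)]
shifted = [v + 30.0 for v in baseline]
assert psi(baseline, baseline) < 0.01  # identical distributions: no drift
assert psi(baseline, shifted) > 0.25   # shifted distribution: major drift
```

In production this computation would typically run per feature on a schedule, with the resulting PSI values exported to a dashboard (e.g., Grafana) and thresholds wired into alerting.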

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Solutions Architect / Technical Lead - AI & Automation

Key Responsibilities

Solution Architecture & Development
  • Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces.
  • Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.

AI/ML Engineering
  • Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python.
  • Implement NLP pipelines for document extraction, classification, and data enrichment.

Cloud & Database Management
  • Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance.
  • Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.

Technical Leadership
  • Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery.
  • Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.

Client Enablement
  • Advise clients on technical best practices for scaling AI solutions.
  • Document architectures, conduct knowledge transfers, and mentor junior engineers.

Required Technical Expertise
  • Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs).
  • AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
  • Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
  • OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
  • Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS); Infrastructure-as-Code (Terraform/Bicep); CI/CD pipelines (Azure DevOps).
  • Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).

Preferred Qualifications
  • Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
  • Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
  • Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
  • Familiarity with Kubernetes, Docker, and MLOps practices.
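The document-extraction work described above turns raw OCR text into structured fields. A toy, rule-based sketch of that post-processing step follows; real pipelines use the named OCR services plus trained layout models, and every field name and pattern here is a hypothetical example:

```python
import re

# Toy post-processing of OCR'd invoice text into structured fields.
# Production systems would use Azure Form Recognizer or Google Vision OCR
# with trained layout models; the regexes below are illustrative only.
PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "total": re.compile(r"Total\s*(?:Due)?\s*[:\-]?\s*[₹$]?\s*([\d,]+\.?\d*)", re.I),
    "date": re.compile(r"Date\s*[:\-]?\s*(\d{2}[/-]\d{2}[/-]\d{4})", re.I),
}

def extract_fields(ocr_text: str) -> dict:
    """Return whichever fields the patterns can find; missing fields are omitted."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            fields[name] = match.group(1)
    return fields

sample = "Invoice No: INV-2041\nDate: 03-05-2024\nTotal Due: $1,250.00"
print(extract_fields(sample))
```

The "omit missing fields" behavior matters in practice: downstream validation can then distinguish "field absent from the document" from "field present but unparseable", which is where OCR-accuracy troubleshooting usually starts.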

Posted 1 week ago

Apply

10.0 years

2 - 7 Lacs

Hyderābād

On-site

Key Responsibilities

Solution Architecture & Development
  • Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces.
  • Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.

AI/ML Engineering
  • Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python.
  • Implement NLP pipelines for document extraction, classification, and data enrichment.

Cloud & Database Management
  • Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance.
  • Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.

Technical Leadership
  • Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery.
  • Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.

Client Enablement
  • Advise clients on technical best practices for scaling AI solutions.
  • Document architectures, conduct knowledge transfers, and mentor junior engineers.

Required Technical Expertise
  • Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs).
  • AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
  • Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
  • OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
  • Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS); Infrastructure-as-Code (Terraform/Bicep); CI/CD pipelines (Azure DevOps).
  • Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).

Preferred Qualifications
  • Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
  • Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
  • Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
  • Familiarity with Kubernetes, Docker, and MLOps practices.

Posted 1 week ago

Apply

Exploring Drift Jobs in India

The drift job market in India is rapidly growing, with an increasing demand for professionals skilled in this area. Drift professionals are sought after by companies looking to enhance their customer service and engagement through conversational marketing.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

Average Salary Range

The average salary range for drift professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 10 lakhs per annum.

Career Path

A typical career path in the drift domain may progress from roles such as Junior Drift Specialist or Drift Consultant to Senior Drift Specialist, Drift Manager, and eventually reaching the position of Drift Director or Head of Drift Operations.

Related Skills

In addition to expertise in drift, professionals in this field are often expected to have skills in customer service, marketing automation, chatbot development, and data analytics.

Interview Questions

  • What is conversational marketing? (basic)
  • How would you handle a customer complaint through a drift chatbot? (medium)
  • Can you explain a scenario where you successfully implemented drift for a client? (medium)
  • What are some common challenges faced in drift implementation and how do you overcome them? (advanced)
  • How do you measure the success of a drift campaign? (medium)
  • Explain the importance of personalization in drift marketing. (medium)
  • How do you ensure compliance with data privacy regulations when using drift? (advanced)
  • What strategies would you implement to increase customer engagement through drift? (medium)
  • Can you provide examples of drift integrations with other marketing tools? (advanced)
  • How do you stay updated on the latest trends and developments in drift technology? (basic)
  • Describe a situation where you had to troubleshoot a technical issue in a drift chatbot. (medium)
  • How do you handle leads generated through drift to ensure conversion? (medium)
  • What are some best practices for setting up drift playbooks? (medium)
  • How do you customize drift for different target audiences? (medium)
  • Explain the difference between drift and traditional marketing methods. (basic)
  • Can you give an example of a successful drift campaign you were involved in? (medium)
  • How do you ensure a seamless transition between drift and human agents in customer interactions? (medium)
  • What metrics do you track to measure the effectiveness of a drift chatbot? (medium)
  • How do you handle negative feedback received through drift interactions? (medium)
  • What are the key components of a successful drift strategy? (medium)
  • How do you handle a high volume of customer inquiries through drift? (medium)
  • Explain the role of AI in drift marketing. (medium)
  • How do you ensure that drift chatbots are providing accurate information to customers? (medium)
  • Describe a situation where you had to customize drift to meet specific client requirements. (advanced)
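Several of the questions above ask how to measure a chatbot campaign's success. The usual answer is funnel arithmetic over a handful of counts; a minimal sketch follows, with metric names and example numbers that are generic conventions rather than anything Drift-specific:

```python
def funnel_metrics(visitors: int, conversations: int,
                   qualified_leads: int, conversions: int) -> dict:
    """Compute common conversational-marketing funnel rates as percentages."""
    def pct(part, whole):
        return round(100 * part / whole, 1) if whole else 0.0
    return {
        "engagement_rate": pct(conversations, visitors),          # visitors who started a chat
        "qualification_rate": pct(qualified_leads, conversations),  # chats that became leads
        "conversion_rate": pct(conversions, qualified_leads),     # leads that converted
        "overall_conversion": pct(conversions, visitors),         # end-to-end funnel
    }

print(funnel_metrics(visitors=10_000, conversations=800,
                     qualified_leads=200, conversions=50))
```

In an interview, being able to name which stage of this funnel a given optimization targets (engagement vs. qualification vs. conversion) is usually worth more than quoting any single benchmark number.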

Closing Remark

As you prepare for a career in drift jobs in India, remember to showcase your expertise, experience, and passion for conversational marketing. Stay updated on industry trends and technologies to stand out in the competitive job market. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies