
677 Drift Jobs - Page 22

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Role: DevOps / Site Reliability Engineer
Duration: 6 months (possible extension)
Location: Pune (on-site)
Timings: Full time (as per company timings, IST)
Notice Period: Immediate joiners only
Experience: 5+ years

About the Role
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

Key Responsibilities
- Design and implement resilient deployment strategies (blue-green, canary, GitOps).
- Configure and maintain observability tools (logs, metrics, traces, alerts).
- Optimize backend service performance through code and infrastructure reviews (Node.js, Django, Go, Java).
- Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
- Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
- Lead or participate in incident response and root-cause analysis using logs, traces, and dashboards.
- Reduce configuration drift and standardize secrets, tagging, and infrastructure consistency across environments.
- Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

Required Skills & Experience
- 5–10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
- Strong coding/scripting skills and the ability to review production-grade backend code.
- Hands-on experience with Kubernetes in production, preferably on GKE.
- Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
- Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, secret management).
- Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
- Strong communication and documentation skills; capable of driving improvements through PRs and design reviews.

Tech Stack & Tools
- Cloud & Orchestration: GKE, Kubernetes
- IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
- Monitoring & Alerting: Datadog, PagerDuty
- Databases & Networking: CloudSQL, Cloudflare
- Security & Access Control: secret management, IAM
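The drift-reduction responsibility above boils down to diffing desired state (what the IaC code declares) against live state (what the cloud actually runs). A minimal sketch, with plain dicts standing in for Terraform-declared and live resource tags; all names here are hypothetical, and a real setup would read Terraform state and cloud APIs instead:

```python
def find_drift(desired: dict, actual: dict) -> dict:
    """Return keys whose live value differs from the desired (IaC) value."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    # Keys present on the live resource but absent from code are drift too.
    for key in actual.keys() - desired.keys():
        drift[key] = {"desired": None, "actual": actual[key]}
    return drift

desired_tags = {"env": "prod", "owner": "platform", "cost-center": "cc-42"}
actual_tags = {"env": "prod", "owner": "sre", "debug": "true"}
print(find_drift(desired_tags, actual_tags))
```

The same diff shape works for secrets metadata or node-pool settings; the hard part in practice is fetching `actual` reliably, which is what `terraform plan` automates.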

Posted 1 month ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

Role Description

Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview
We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques, such as Retrieval-Augmented Generation (RAG) and LLM-based applications, into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities

Model Development & Deployment
- Design, implement, and manage scalable ML pipelines using CI/CD practices.
- Operationalize ML and GenAI models, ensuring high availability, observability, and reliability.
- Automate data and model validation, versioning, and monitoring processes.

Technical Leadership & Mentorship
- Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices.
- Define architecture standards and promote engineering excellence across ML-Ops workflows.

Innovation & Generative AI Strategy
- Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications.
- Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.

Governance & Compliance
- Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability.
- Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills
- 5+ years of experience in ML-Ops, Data Engineering, or Machine Learning.
- Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
- Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
- Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
- Strong understanding of responsible AI and MLOps governance best practices.
- Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills
- Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
- Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
- Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
- Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
- Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
- Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
- Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops
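The RAG pattern mentioned above has a simple core: retrieve the most relevant document for a query, then put it into the model's prompt. A toy sketch using bag-of-words cosine similarity instead of learned embeddings and a vector database (which is what a production system would use); all data here is made up:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real RAG uses learned embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Terraform manages cloud infrastructure as code",
    "Retrieval augmented generation grounds LLM answers in documents",
    "Kubernetes schedules containers across nodes",
]
context = retrieve("how does retrieval augmented generation work", docs)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

Swapping `embed` for a real embedding model and `retrieve` for a FAISS/Pinecone/Weaviate lookup turns this skeleton into the pipeline the listing describes.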

Posted 1 month ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Staff AI Engineer, MLOps

About the Team
The AI Center of Excellence team includes Data Scientists and AI Engineers who work together to conduct research, build prototypes, design features, and build production AI components and systems. Our mission is to leverage the best available technology to protect our customers' attack surfaces. We partner closely with Detection and Response teams, including our MDR service, to leverage AI/ML for enhanced customer security and threat detection. We operate with a creative, iterative approach, building on 20+ years of threat analysis and a growing patent portfolio. We foster a collaborative environment, sharing knowledge, developing internal learning, and encouraging research publication. If you're passionate about AI and want to make a major impact in a fast-paced, innovative environment, this is your opportunity.

The technologies we use include:
- AWS for hosting our research environments, data, and features (e.g., SageMaker, Bedrock)
- EKS to deploy applications
- Terraform to manage infrastructure
- Python for analysis and modeling, taking advantage of NumPy and pandas for data wrangling
- Jupyter notebooks (locally and remotely hosted) as a computational environment
- scikit-learn for building machine learning models
- Anomaly detection methods to make sense of unlabeled data

About the Role
Rapid7 is seeking a Staff AI Engineer to join our team as we expand and evolve our growing AI and MLOps efforts. You should have a strong foundation in software engineering, and in MLOps and DevOps systems and tools. Further, you'll have a demonstrated track record of taking models created in the AI R&D process to production with repeatable deployment, monitoring, and observability patterns. In this intersectional role, you will combine your expertise in AI/ML deployments, cloud systems, and software engineering to enhance our product offerings and streamline our platform's functionality.

In This Role, You Will
- Design and build ML production systems, including project scoping, data requirements, modeling strategies, and deployment
- Develop and maintain data pipelines, manage the data lifecycle, and ensure data quality and consistency throughout
- Assure robust implementation of ML guardrails and manage all aspects of service monitoring
- Develop and deploy accessible endpoints, including web applications and REST APIs, while maintaining steadfast data privacy and adherence to security best practices and regulations
- Share expertise and knowledge consistently with internal and external stakeholders, nurturing a collaborative environment and fostering the development of junior engineers
- Embrace agile development practices, valuing constant iteration, improvement, and effective problem-solving in complex and ambiguous scenarios

The Skills You'll Bring Include
- 8-12 years of experience as a Software Engineer, with at least 3 years focused on gaining expertise in ML deployment (especially in AWS)
- Solid technical experience in the following:
  - Software engineering: developing APIs with Flask or FastAPI, paired with strong Python knowledge
  - DevOps and MLOps: designing and integrating scalable AI/ML systems into production environments, CI/CD tooling, Docker, Kubernetes, cloud AI resource utilization and management
  - Pipelines, monitoring, and observability: data pre-processing and feature engineering, model monitoring and evaluation
- A growth mindset: welcoming the challenge of tackling complex problems with a bias for action
- Strong written and verbal communication skills: able to communicate technical concepts to diverse audiences and create clear documentation of system architectures and implementation details
- Proven ability to collaborate effectively across engineering, data science, product, and other teams to drive successful MLOps initiatives and ensure alignment on goals and deliverables

Experience With the Following Would Be Advantageous
- Java programming
- The security industry
- AI and ML models, including their operational frameworks and limitations
- Familiarity with resources that enable data scientists to fine-tune and experiment with LLMs
- Knowledge of or experience with model risk management strategies, including model registries, concept/covariate drift monitoring, and hyperparameter tuning

We know that the best ideas and solutions come from multi-dimensional teams. That's because these teams reflect a variety of backgrounds and professional experiences. If you are excited about this role and feel your experience can make an impact, please don't be shy - apply today.

About Rapid7
At Rapid7, we are on a mission to create a secure digital world for our customers, our industry, and our communities. We do this by embracing tenacity, passion, and collaboration to challenge what's possible and drive extraordinary impact. Here, we're building a dynamic workplace where everyone can have the career experience of a lifetime. We challenge ourselves to grow to our full potential. We learn from our missteps and celebrate our victories. We come to work every day to push boundaries in cybersecurity and keep our 10,000 global customers ahead of whatever's next. Join us and bring your unique experiences and perspectives to tackle some of the world's biggest security challenges.
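On the team's use of anomaly detection for unlabeled data: the simplest baseline is flagging points far from the sample mean. A minimal z-score sketch with made-up latency numbers (real pipelines on this kind of team would use something like scikit-learn's isolation forests over engineered features, not raw z-scores):

```python
from statistics import mean, stdev

def zscore_anomalies(values: list[float], threshold: float = 3.0) -> list[float]:
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > threshold]

# Hypothetical per-request latencies with one obvious outlier.
latencies_ms = [12, 11, 13, 12, 14, 11, 13, 12, 250, 12, 13]
print(zscore_anomalies(latencies_ms, threshold=2.5))  # → [250]
```

Note that with small samples a single outlier inflates the standard deviation and caps the achievable z-score, which is why the threshold here is 2.5 rather than the textbook 3.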

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Title: Azure DevOps Engineer
Location: Pune
Experience: 5-7 Years

Job Description
- 5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience
- Platform Thinking: strong understanding of platform engineering principles, developer experience, and self-service capabilities
- Azure Expertise: advanced knowledge of Azure services including compute, networking, storage, and managed services
- Infrastructure as Code: proficient in Terraform, ARM templates, or Azure Bicep, with hands-on experience in large-scale deployments

DevOps and Automation
- CI/CD Pipelines: expert-level experience with Azure DevOps, GitHub Actions, or Jenkins
- Automation Scripting: strong programming skills in Python, PowerShell, or Bash for automation and tooling
- Git Workflows: advanced understanding of Git branching strategies, pull requests, and code review processes

Cloud Architecture and Security
- Cloud Architecture: deep understanding of cloud design patterns, microservices, and distributed systems
- Security Best Practices: implementation of security scanning, compliance automation, and zero-trust principles
- Networking: advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity
- Identity Management: experience with Azure Active Directory, RBAC, and identity governance

Monitoring and Observability
- Azure Monitor: advanced experience with Azure Monitor, Log Analytics, and Application Insights
- Metrics and Alerting: implementation of comprehensive monitoring strategies and incident response
- Logging Solutions: experience with centralized logging and log analysis platforms
- Performance Optimization: proactive performance monitoring and optimization techniques

Roles and Responsibilities

Platform Development and Management
- Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently
- Create and maintain platform abstractions that simplify complex infrastructure for development teams
- Develop internal developer platforms (IDPs) with standardized templates, workflows, and guardrails
- Implement platform-as-a-service (PaaS) solutions using Azure-native services
- Establish platform standards, best practices, and governance frameworks

Infrastructure as Code (IaC)
- Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep
- Create reusable infrastructure modules and templates for consistent environment provisioning
- Implement GitOps workflows for infrastructure deployment and management
- Maintain infrastructure state management and drift detection mechanisms
- Establish infrastructure testing and validation frameworks

DevOps and CI/CD
- Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools
- Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks
- Create deployment strategies including blue-green, canary, and rolling deployments
- Establish branching strategies and release management processes
- Implement secrets management and secure deployment practices

Platform Operations and Reliability
- Implement monitoring, logging, and observability solutions for platform services
- Establish SLAs and SLOs for platform services and developer experience metrics
- Create self-healing and auto-scaling capabilities for platform components
- Implement disaster recovery and business continuity strategies
- Maintain platform security posture and compliance requirements

Preferred Qualifications
- Bachelor's degree in computer science or a related field (or equivalent work experience)
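Of the deployment strategies listed above, blue-green is the easiest to model: keep two identical environments, deploy to the idle one, then flip traffic. A toy sketch (a real implementation would flip an Application Gateway backend pool or a load-balancer target, not an in-memory pointer; the class and version strings are invented):

```python
class BlueGreenRouter:
    """Toy model of a blue-green switch: two identical environments,
    traffic points at exactly one; rollback is flipping the pointer back."""

    def __init__(self) -> None:
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"

    def deploy(self, version: str) -> str:
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version  # deploy only to the idle side
        return idle

    def switch(self) -> None:
        # Atomic cutover; the old environment stays warm for instant rollback.
        self.live = "green" if self.live == "blue" else "blue"

router = BlueGreenRouter()
router.deploy("v1.1")  # green now holds v1.1, blue still serves v1.0
router.switch()        # cut traffic over to green
print(router.live, router.environments[router.live])
```

Canary differs only in the cutover step: instead of one atomic switch, traffic shifts to the new environment in percentage increments gated by health metrics.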

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

About the role:
We're hiring an execution-first inbound marketer who lives in the data layer and thinks like a growth hacker. If you get a kick out of turning raw inputs into performance pipelines using HubSpot, Salesforce, automation scripts, and creative campaigns, this role is built for you. You'll work directly under the Inbound Marketing Director and will own end-to-end delivery across digital ads, SEO, SEM, marketing ops, campaign automation, content distribution, and event execution.

Responsibilities:

1. SEO, Paid Media & Web Analytics:
- Execute and optimize SEO initiatives using SEMrush, Google Search Console, and Google Analytics.
- Manage paid campaigns (primarily LinkedIn) in coordination with agency partners: own ad creatives, copy, and weekly reporting.
- Monitor SEO health, own backlink sprints, and manage keyword-to-content alignment.

2. Email Marketing & Campaign Execution:
- Segment lists and deploy nurture streams based on product-market clusters.
- Draft and QA emails for announcements, press releases, and en-masse campaigns.
- Own daily/weekly email performance dashboards in Sheets + HubSpot.

3. Events & Engagement Programs:
- Coordinate speaker outreach, guest targeting, and content logistics for CFO roundtables and micro-events.
- Support post-event workflows in HubSpot (tagging, follow-up, recycling leads into nurture).

4. Marketing Automation & CRM Ops:
- Manage HubSpot as the source of truth for marketing automation (forms, workflows, nurture streams, contact properties).
- Support Salesforce campaign and lead-tracking workflows in sync with sales/BDR efforts.
- Build automations via Google Apps Script to bridge tools, clean data, and trigger workflows across Sheets, HubSpot, and SFDC.

5. Presentation & Creative Aesthetics:
- Build internal and external-facing slides for events, reviews, and campaign pitches.
- Maintain brand consistency and high visual polish across decks and outbound collateral.

Requirements:
- 2–5 years of experience in inbound or performance marketing for B2B/SaaS companies.
- Experienced with tools: SEMrush, Google Analytics, Google Search Console, LinkedIn Ads, chatbots (Qualified or Drift).
- Hands-on with HubSpot (automation, forms, emails) and Salesforce (leads, campaigns, reporting).
- Strong skills in Google Sheets and Excel (formulas, pivot tables, macros).
- An aesthetic sense for creating slide decks in Google Slides or PowerPoint.
- Obsessed with clean data, dashboards, and campaign ROI.
- Comfortable wearing multiple hats (from ops to creative).
- Familiarity with chatbot flows and conversational marketing logic.
- Previous collaboration with SDRs/BDRs to generate MQLs and SQLs.

Benefits:
- Well-funded and proven startup with large ambitions and competitive salaries.
- Entrepreneurial culture where pushing limits, creating, and collaborating is everyday business.
- Open communication with management and company leadership.
- Small, dynamic teams = massive impact.

Simetrik considers qualified applicants for employment without regard to race, gender, age, color, religion, national origin, marital status, disability, sexual orientation, gender identity/expression, protected military/veteran status, or any other legally protected factor.

I authorize Simetrik to be the data controller and, as such, it may collect, store and use my information for the purposes of my possible hiring, under the conditions described in this document. I also give my consent to Simetrik to treat my personal data in accordance with the Personal Data Treatment Policy available at https://simetrik.com/, which was made known to me before collecting my personal data.

Join a team of incredibly talented people that build things, are free to create, and love collaborating!

Posted 2 months ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: In-Person (sftwtrs.ai Lab)
Experience Level: Early Career / 1–3 years

About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.

Role Overview
As a Research Engineer I, you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you'll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills.

Key Responsibilities

Research & Prototyping
- Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
- Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
- Design experiments, run benchmarks, and analyze results to validate research hypotheses.

Software Development & Integration
- Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
- Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
- Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.

Collaboration & Documentation
- Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements.
- Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
- Participate in regular code reviews, design discussions, and sprint planning sessions.

Model Deployment & Monitoring
- Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or a custom MLOps stack).
- Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
- Troubleshoot deployment issues; optimize inference pipelines for latency and throughput.

Continuous Learning & Contribution
- Stay current with AI/ML trends; present findings to the team and propose new research directions.
- Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
- Mentor interns or junior engineers on machine learning best practices and coding standards.

Qualifications

Education:
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.

Research Experience:
- 1–3 years of hands-on experience in AI/ML research or equivalent internships.
- Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training).
- Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.

Development Skills:
- Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
- Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
- Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.

Software Engineering Practices:
- Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
- Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
- Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.

Analytical & Collaborative Skills:
- Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines.
- Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
- Demonstrated ability to collaborate effectively in a small, agile team.

Preferred Skills (Not Mandatory)
- Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
- Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
- Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
- Exposure to Rust or Go for high-performance inference code.
- Contributions to open-source AI or security automation projects.

Why Join Us?
- Cutting-Edge Research & Production Impact: work on adversarial ML and security-automation projects that go from concept to real-world deployment.
- Hands-On Mentorship: collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
- Innovative Environment: join a lean, highly specialized team where your contributions are immediately visible and valued.
- Professional Growth: access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
- Competitive Compensation & Benefits: attractive salary, health insurance, and opportunities for performance-based bonuses.

How to Apply
Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line "RE: Research Engineer I Application."

sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
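The model-monitoring scripts this role describes often track distribution drift with the Population Stability Index: bin a reference sample, bin the live sample the same way, and compare bin proportions. A self-contained sketch with invented data (the 0.1/0.25 reading thresholds are a common rule of thumb, not a standard):

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 4) -> float:
    """Population Stability Index between a reference sample and live data.
    Rule-of-thumb reading: < 0.1 stable, > 0.25 significant drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + i * (hi - lo) / bins for i in range(1, bins)]

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index for x
        # A tiny epsilon keeps log() finite when a bin is empty.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

reference = [float(i) for i in range(100)]  # training-time feature values
shifted = [x + 50.0 for x in reference]     # live values drifted upward
print(psi(reference, reference), psi(reference, shifted))
```

In a monitoring script this would run on a schedule per feature, with values above the drift threshold paged to the on-call engineer.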

Posted 2 months ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Min Experience: 8 years
Location: Bengaluru
Job Type: full-time

Requirements
As an SDE-3 in AI/ML, you will:
- Translate business asks and requirements into technical requirements, solutions, architectures, and implementations
- Define clear problem statements and technical requirements by aligning business goals with AI research objectives
- Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets
- Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows
- Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructure
- Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met
- Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team
- Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation
- Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features

Key Responsibilities

Problem Definition & Requirements
- Translate business use cases into detailed AI/ML problem statements and success metrics
- Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle

Architecture & Prototyping
- Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations
- Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors

Development & Productionization
- Write clean, maintainable code in Python, Java, or Go, following software engineering best practices
- Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments
- Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes)

Post-Deployment Monitoring
- Define and implement monitoring dashboards and alerting for model drift, latency, and throughput
- Conduct regular performance tuning and cost analysis to maintain operational efficiency

Mentorship & Collaboration
- Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support
- Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics
- Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems
- Proven track record of architecting and deploying production AI applications at scale
- Strong programming skills in Python and one or more of Java, Go, or C++
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments
- Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering
- Expertise in CI/CD, automated testing frameworks, and MLOps best practices
- Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences

Preferred Experience
- Prior experience building agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar)
- Familiarity with open-source LLMs (e.g., Hugging Face hosted) and custom fine-tuning
- Familiarity with ASR (speech-to-text), TTS (text-to-speech), and other multi-modal systems
- Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana)
- Publications or patents in AI/ML, or related conference presentations
- Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML)
- Proven experience designing, implementing, and rigorously testing AI-driven voice agents, integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, and ensuring high performance and reliability

What We Offer
- Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment
- Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth
- Competitive compensation, comprehensive benefits, and equity options
- Flexible work arrangements and support for professional development
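The prompt-engineering and context-management duties above come down to assembling a bounded prompt from system rules, conversation history, and the user's question. A toy sketch with an invented character budget standing in for a token limit (a real system would count tokens with the model's tokenizer and call an LLM API):

```python
def build_prompt(question: str, history: list[str], context_limit: int = 200) -> str:
    """Assemble a prompt from system rules, trimmed history, and the question.
    Trimming oldest-first keeps the prompt inside a (hypothetical) budget."""
    system = "You are a support agent. Answer only from the given history."
    kept: list[str] = []
    used = 0
    for turn in reversed(history):  # newest turns are kept preferentially
        if used + len(turn) > context_limit:
            break
        kept.insert(0, turn)
        used += len(turn)
    return "\n".join([system, *kept, f"User: {question}"])

history = ["x" * 300, "User: hi", "Agent: hello"]
prompt = build_prompt("what now?", history)
print(prompt)
```

The oversized oldest turn is dropped while recent turns survive; agentic frameworks like LangGraph manage exactly this kind of state, plus tool calls, on the team's behalf.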

Posted 2 months ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Solutions Architect / Technical Lead - AI & Automation

Key Responsibilities

Solution Architecture & Development
- Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces.
- Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.

AI/ML Engineering
- Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python.
- Implement NLP pipelines for document extraction, classification, and data enrichment.

Cloud & Database Management
- Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance.
- Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.

Technical Leadership
- Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery.
- Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.

Client Enablement
- Advise clients on technical best practices for scaling AI solutions.
- Document architectures, conduct knowledge transfers, and mentor junior engineers.

Required Technical Expertise
- Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs).
- AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
- Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
- OCR Technologies: production experience with Google Vision OCR and Azure Form Recognizer.
- Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS); Infrastructure-as-Code (Terraform/Bicep); CI/CD pipelines (Azure DevOps).
- Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation; proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).

Preferred Qualifications
- Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
- Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
- Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
- Familiarity with Kubernetes, Docker, and MLOps practices.
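The document-extraction pipelines this role covers take OCR text and pull structured fields from it. A deliberately simplified sketch using fixed regular expressions on invented invoice text; production systems layer NER models and layout information (the LP/LI/LT stack above) on top of, or instead of, patterns like these:

```python
import re

# Toy field extractor for invoice-like OCR output. Field names and the
# sample text are invented for illustration.
PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*#?\s*(\w+)"),
    "date": re.compile(r"Date:\s*(\d{4}-\d{2}-\d{2})"),
    "total": re.compile(r"Total:\s*\$?([\d,]+\.\d{2})"),
}

def extract_fields(text: str) -> dict:
    """Return the first match per field, or None when a field is absent."""
    out = {}
    for name, pattern in PATTERNS.items():
        m = pattern.search(text)
        out[name] = m.group(1) if m else None
    return out

sample = "Invoice #INV42 Date: 2024-05-01 ... Total: $1,234.50"
print(extract_fields(sample))
```

Downstream, the extracted dict would be validated, enriched, and written to MongoDB; OCR noise is why real pipelines also carry per-field confidence scores.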

Posted 2 months ago

Apply

10.0 years

2 - 7 Lacs

Hyderābād

On-site

Key Responsibilities

Solution Architecture & Development: Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.
AI/ML Engineering: Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment.
Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.
Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.
Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers.

Required Technical Expertise

Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs).
AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS); Infrastructure-as-Code (Terraform/Bicep); CI/CD pipelines (Azure DevOps).
Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).

Preferred Qualifications

Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
Familiarity with Kubernetes, Docker, and MLOps practices.

Posted 2 months ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Hyderābād

On-site

Must-Have Skills & Traits

Core Engineering
Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices.
Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django.
Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning.
Experience working with databases (SQL/NoSQL), caching layers, and background job queues.

AI/ML & GenAI Expertise
Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment.
Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers.
Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search.
Comfortable working with unstructured data (text, images) in real-world product environments.
Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate.

Ownership & Execution
Demonstrated ability to take full ownership of features or modules from architecture to delivery.
Able to work independently in ambiguous situations and drive solutions with minimal guidance.
Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions.
Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance.

Nice-to-Have Skills
2-5 years of relevant experience.
Experience in startup or fast-paced product environments.
Familiarity with asynchronous programming patterns in Python.
Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge.
Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation.
Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools.
Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines.
Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features.
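The retrieval step of the RAG and vector-search integration mentioned above can be sketched with toy, hand-made embeddings; a production system would swap in a real embedding model and a vector store such as FAISS, Pinecone, or Weaviate, and the corpus and vectors here are invented for illustration:

```python
import math

# Toy vector-search step of a RAG pipeline: rank documents by cosine
# similarity of their embeddings to the query embedding.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "privacy notice": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k corpus entries most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2, 0.1]))  # nearest document to the query
```

The retrieved documents would then be stuffed into the LLM prompt as grounding context.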

Posted 2 months ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a Quality Engineer on the Marketing Datahub Squad, you'll join a team of passionate professionals dedicated to building and supporting BCG's next-generation data analytics foundation. Your work will enable personalized customer journeys and empower data-driven decisions by ensuring our analytics platform is stable, scalable, and reliable. The incumbent will help improve and champion data quality and integrity throughout the data lake and other external systems. The candidate must be detail-oriented, open-minded, and interested in continuous learning, while being curious and unafraid to ask questions. They must be willing to innovate and initiate change, discover fresh solutions, and present innovative ideas while driving toward increased test automation, and must work well in a global team environment and collaborate with peers and stakeholders.

Champion data quality across our end-to-end pipeline: from various ingestion sources into Snowflake, through various transformations, to downstream analytics and reporting.
Perform integration and regression testing to ensure all system components work together successfully.
Design, execute, and automate test plans for ETL solutions to ensure each batch and streaming job delivers accurate, timely data.
Develop and monitor checks via dbt tests and other tools that surface schema drift, record-count mismatches, null anomalies, and other integrity issues.
Track and manage defects in JIRA; work collaboratively with the Product Owner, Analysts, and Data Engineers to prioritize and resolve critical data bugs.
Maintain test documentation, including test strategies, test cases, and runbooks, ensuring clarity for both technical and business stakeholders.
Continuously improve our CI/CD pipelines (GitHub Actions) by integrating data quality gates and enhancing deployment reliability.

What You'll Bring
Agile SDLC and testing life cycle: proven track record of testing in agile environments with distributed teams.
Broad testing expertise: hands-on experience in functional, system, integration, and regression testing, applied specifically to data/ETL pipelines.
Data platform tools: practical experience with Snowflake, dbt, and Fivetran for building, transforming, and managing analytic datasets.
Cloud technologies: familiarity with AWS services (Lambda, Glue jobs, and other AWS data stack components) and Azure, including provisioning test environments and validating cloud-native data processes.
SQL mastery: ability to author and optimize complex queries to validate transformations, detect discrepancies, and generate automated checks.
Pipeline validation: testing data lake flows (ingest/extract), backend API services for data push/pull, and any data access or visualization layers.
Defect management: using JIRA for logging, triaging, and reporting on data defects, and Confluence for maintaining test docs and KPIs.
Source control and CI/CD: hands-on with Git for branching and code reviews; experience integrating tests into Jenkins or GitHub Actions.
Test planning and strategy: helping define scope and estimates and developing test plans, test strategies, and test scripts through iterations to ensure a quality product.
Quality metrics and KPIs: tracking and presenting KPIs for testing efforts, such as test coverage, gaps, hotfixes, and defect leakage.
Automation: experience writing end-to-end and/or functional integration automated tests using relevant test automation frameworks.

You're Good At
Data-focused testing: crafting and running complex SQL-driven validations, cross-environment comparisons, and sample-based checks in complex pipelines.
Automation mindset: identifying and implementing test automation solutions for regression, monitoring, and efficiency purposes.
Collaboration: partnering effectively with Data Engineering, Analytics, BI, and Product teams to translate requirements into testable scenarios; being an open, positive team player, able to work collaboratively in virtual teams, and a highly proactive self-starter.
Agile delivery: adapting to fast-moving sprints and contributing to sprint planning, retrospectives, and backlog grooming.
Proactivity: spotting gaps in coverage, proposing new test frameworks or tools, and driving adoption across the squad.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
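The dbt checks this role calls for (null anomalies, integrity issues, accepted values) are typically declared in a schema file. A minimal sketch with hypothetical model and column names:

```yaml
# schema.yml -- model and column names are made up for illustration.
version: 2
models:
  - name: stg_marketing_contacts
    columns:
      - name: contact_id
        tests:
          - not_null
          - unique
      - name: lifecycle_stage
        tests:
          - accepted_values:
              values: ['lead', 'mql', 'sql', 'customer']
```

Running `dbt test` then surfaces failing rows per check, which can be wired into a CI quality gate.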

Posted 2 months ago

Apply

8.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role Overview
We are seeking a QA Manager with strong experience in test automation and a passion for AI. You will lead a team of QA engineers and work closely with cross-functional stakeholders to build robust testing frameworks, introduce intelligent automation, and ensure end-to-end product quality. You'll also play a key role in shaping how AI can be used to improve QA efficiency and software reliability.

Key Responsibilities
Own the QA strategy, test planning, and execution for web, mobile, and API-based applications.
Lead, mentor, and grow a team of QA engineers and SDETs.
Design and implement automation frameworks using modern tools (e.g., Selenium, Cypress, Playwright, Appium).
Evaluate and integrate AI-driven QA tools (e.g., Testim, Mabl, Functionize, Diffblue, ChatGPT-based test case generation).
Drive continuous integration and delivery (CI/CD) of automated tests across environments.
Establish test data strategies using synthetic data generation and AI-based test data tools.
Collaborate with product managers, developers, and DevOps teams to define acceptance criteria and promote shift-left testing.
Monitor quality metrics and use analytics to improve test coverage, defect detection, and release velocity.
Stay abreast of emerging QA trends, especially in AI/ML validation, generative AI testing, and model interpretability QA.

Required Skills & Qualifications
8+ years of experience in Quality Assurance, with at least 3 years in a managerial or leadership role.
Proven track record of building and scaling test automation for complex systems.
Experience with at least one programming language (Python, Java, or JavaScript preferred).
Hands-on experience with AI-powered QA tools or building AI/ML pipelines with embedded QA.
Solid understanding of AI/ML concepts such as model training, inference, data drift, and validation.
Strong knowledge of testing practices: unit, integration, functional, performance, regression, and security.
Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, etc.).
Familiarity with cloud platforms (AWS, Azure, or GCP) and containerized environments (Docker, Kubernetes).
Excellent leadership, communication, and stakeholder management skills.

Nice to Have
Exposure to MLOps or AI model lifecycle QA.
Experience in regulatory or enterprise-level compliance QA (e.g., SOC 2, GDPR).
Contributions to open-source QA projects or AI QA communities.
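Driving CI/CD of automated tests, as described above, usually means gating merges on the test suite. A GitHub Actions sketch; the workflow name, paths, and commands are assumptions, not a prescribed setup:

```yaml
# .github/workflows/regression-suite.yml -- hypothetical quality gate
name: regression-suite
on: [pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install -r requirements.txt
      - run: pytest tests/ --maxfail=1 --junitxml=report.xml
```

Failing tests block the pull request, which is the shift-left behavior the role promotes.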

Posted 2 months ago

Apply

15.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Join the Future of Supply Chain Intelligence — Powered by Agentic AI At Resilinc, we’re not just solving supply chain problems — we’re pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action — before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most — from factory floors to patient care. Our advantage isn’t just technology — it’s the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It’s how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale. But the real power behind Resilinc? Our people. We’re a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them — when and where they need them. Whether it’s helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you’ll see your impact every day. If you're passionate about building technology that matters, driven by purpose, and being an agent of change who is ready to shape the next era of self-healing supply chains, we’d love to meet you. Resilinc | Innovation with Purpose. Intelligence with Impact. About The Role At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we’re pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What’s an Agent Hacker? It’s not just a title — it’s a mindset. 
You're the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you'll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You'll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications, ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker, Resilinc is your proving ground.

What You Will Do
Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance.
Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, with a good understanding of data integrity, data models, etc.
Assess AI models using quality metrics such as precision/recall and hallucination detection.
Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness.
Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations.
Develop supporting services and tools.
Conduct scalability, latency, and performance testing for AI-driven applications.
Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs.
Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing.
Work closely with cross-functional teams to integrate automation best practices into the development lifecycle.
Identify, document, and track bugs while conducting detailed regression testing to ensure product quality.

What You Will Bring
Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest.
Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness.
Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL.
Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results.

Why You Will Love It Here
Next-level QA: go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies.
Agentic AI frontier: be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises.
Full-stack test engineering: combine Python, SQL, and tools like LangTest, Arize, Selenium, and Playwright to test everything from APIs to AI fairness.
Purpose-driven mission: join a remote-first team that protects critical supply chains, ensuring vital products reach people when they need them most.

What's in it for you?
At Resilinc, we're fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth, and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process, please contact HR@resilinc.com.
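The precision/recall assessment mentioned in the responsibilities reduces to simple counts over predicted versus actual labels. A self-contained sketch with made-up binary labels (1 = correct extraction):

```python
# Precision: of everything the model flagged, how much was right?
# Recall: of everything that was right, how much did the model flag?

def precision_recall(predicted, actual):
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred   = [1, 1, 0, 1, 0]
actual = [1, 0, 0, 1, 1]
p, r = precision_recall(pred, actual)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

Purpose-built harnesses (Arize, LangTest) layer richer LLM-specific metrics, such as hallucination rates, on top of these basics.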

Posted 2 months ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Working as an AI/ML Engineer at Navtech, you will:
Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
Work closely with data engineers to develop scalable and reliable data pipelines.
Experiment with different algorithms and techniques to improve model performance.
Monitor and maintain production ML models, including retraining and model drift detection.
Collaborate with software engineers to integrate ML models into applications and services.
Document processes, experiments, and decisions for reproducibility and transparency.
Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
2-4 years of hands-on experience building and deploying ML models in real-world applications.
Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, and regularization.
Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
Familiarity with version control (Git) and basic software engineering practices.
Strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
A master's or bachelor's degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education throughout in English medium.

We'll REALLY love you if you:
Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
Have knowledge of GenAI prompting and hosting of LLMs.
Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
Have exposure to deep learning and neural network architectures.
Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
Performance review and appraisal twice a year.
Competitive pay package with additional bonus and benefits.
Work with US, UK, and Europe based industry-renowned clients for exponential technical growth.
Medical insurance cover for self and immediate family.
Work with a culturally diverse team from different geographies.

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US and Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe, and a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.
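The model drift detection mentioned above is often approximated with the Population Stability Index (PSI) over bucketed feature distributions. A sketch with made-up histogram counts; the 0.2 alert threshold is a common rule of thumb, not a universal standard:

```python
import math

# PSI compares a baseline (training-time) distribution against the
# current production distribution, bucket by bucket.

def psi(expected, observed):
    """PSI across matching histogram buckets; >0.2 is often read as drift."""
    e_total, o_total = sum(expected), sum(observed)
    score = 0.0
    for e, o in zip(expected, observed):
        e_pct, o_pct = e / e_total, o / o_total
        score += (o_pct - e_pct) * math.log(o_pct / e_pct)
    return score

baseline = [200, 300, 300, 200]   # illustrative training-time counts
current  = [150, 250, 350, 250]   # illustrative production counts
print(round(psi(baseline, current), 4))
```

Note the formula assumes no empty buckets; real monitoring code would smooth zero counts before taking the log.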

Posted 2 months ago

Apply

100.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Purpose:
Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
Package models quickly and ensure high quality at every step using model profiling and validation tools.
Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
Use advanced data-drift analysis to improve model performance over time.
Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization; promote, share, and discover models, environments, components, and datasets across teams; and reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction. Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
Strong on ML pipelines and a modern tech stack.
Proven experience with MLOps on Azure, MLflow, etc.
Experience with scripting and coding using Python.
Working experience with container technologies (Docker, Kubernetes).
Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
Experience with relational databases (e.g., MS SQL Server) and NoSQL databases (e.g., MongoDB).
Python and strong math skills (e.g., statistics).
Problem-solving aptitude and excellent communication and presentation skills.
Automating and streamlining infrastructure, build, test, and deployment processes.
Monitoring and troubleshooting production issues and providing support to development and operations teams.
Managing and maintaining tools and infrastructure for continuous integration and delivery.
Managing and maintaining source control systems and branching strategies.
Strong knowledge of Linux/Unix administration.
Experience with configuration management tools like Ansible, Puppet, or Chef.
Strong understanding of networking, security, and storage.
Understanding and practice of Agile methodologies.
Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).
Above-average verbal, written, and presentation skills.
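The container packaging this role describes is commonly expressed as a Dockerfile. A hypothetical sketch assuming a FastAPI/uvicorn serving script named serve.py, which is an illustrative choice rather than this team's actual stack:

```dockerfile
# Hypothetical image for serving a packaged model over HTTP.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model/ ./model/
COPY serve.py .
EXPOSE 8000
CMD ["uvicorn", "serve:app", "--host", "0.0.0.0", "--port", "8000"]
```

An image like this can then be pushed to a registry and deployed to AKS or a managed online endpoint without changing the application code.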

Posted 2 months ago

Apply

100.0 years

0 Lacs

Kochi, Kerala, India

On-site

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Designation: MLOps Engineer
Location: Kochi, India
Experience: 5-8 years
Qualification: B.Tech / MCA / BCA
Timings: 10 AM to 7 PM (IST)
Work Mode: Hybrid

Purpose:
Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
Package models quickly and ensure high quality at every step using model profiling and validation tools.
Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
Use advanced data-drift analysis to improve model performance over time.
Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization; promote, share, and discover models, environments, components, and datasets across teams; and reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction. Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
Strong on ML pipelines and a modern tech stack.
Proven experience with MLOps on Azure, MLflow, etc.
Experience with scripting and coding using Python and shell scripts.
Working experience with container technologies (Docker, Kubernetes).
Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
Experience in SQL and Python, and strong math skills (e.g., statistics).
Problem-solving aptitude and excellent communication and presentation skills.
Automating and streamlining infrastructure, build, test, and deployment processes.
Monitoring and troubleshooting production issues and providing support to development and operations teams.
Managing and maintaining tools and infrastructure for continuous integration and delivery.
Managing and maintaining source control systems and branching strategies.
Strong skills in scripting languages like Python, Bash, or PowerShell.
Strong knowledge of Linux/Unix administration.
Experience with configuration management tools like Ansible, Puppet, or Chef.
Strong understanding of networking, security, and storage.
Understanding and practice of Agile methodologies.
Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).

Required:
Proficiency and experience working with relational databases and SQL scripting (MS SQL Server).
Above-average verbal, written, and presentation skills.

Posted 2 months ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Lead – Marketing Operations, Mar-Tech and Marketing Analytics
Location: Gurugram (In-office, 5 days a week)
Working Hours: 12:00 PM – 12:00 AM IST (aligned with EST overlap)

Overview
Leena AI is redefining how enterprises automate and resolve HR and IT queries through Agentic AI. We're seeking a data-driven, systems-savvy leader to run our Marketing Operations, Mar-Tech Stack and Marketing Analytics functions. This role is instrumental in enabling predictable pipeline generation and optimizing every lever of our GTM engine, from lead generation through lead capture, lead scoring, and lead routing, to lead conversion and insights. Our marketing and sales both run on HubSpot. The ideal candidate is a self-starter who brings a rare blend of analytical rigor, systems thinking, and process excellence, and will serve as the operational backbone of a fast-scaling marketing organization.

Marketing Operations (MOps)
Mission: Build a high-precision GTM engine that scales with speed and accuracy.
Responsibilities:
- Own end-to-end campaign operations: campaign setup, A/B testing, lead capture (digital), lead upload (events), lead scoring, deduplication, routing, UTM governance, and detailed campaign performance tracking and ongoing optimization.
- Partner with SDR, Sales Ops and RevOps to ensure accurate attribution, pipeline tracking, two-way feedback flows, and lifecycle stage transitions.
- Build and enforce SLAs across inbound workflows: MQL > SQL > Opportunity > Pipeline.
- Define and optimize lead scoring and grading models.
- Develop standardized playbooks and QA processes for product launches, product rollouts, and global field initiatives.
- Set up and maintain campaign taxonomy and hierarchy, lead source taxonomy, program naming conventions and campaign hygiene in HubSpot.

Mar-Tech Stack & Automation
Mission: Deploy the most efficient, interoperable marketing technology stack in B2B SaaS.
Responsibilities:
- Follow B2B SaaS best practices and lay out a Mar-Tech architecture for the company for the coming couple of years; update the architecture as Mar-Tech technologies and tools evolve.
- Build and manage a Mar-Tech roadmap in alignment with growth and sales priorities.
- Lead rapid, cross-functional efforts to define business needs, then own selection criteria and scoring, fast selection processes, integration, and optimization of core platforms: HubSpot, Clearbit, ZoomInfo, Drift, 6sense, Segment, etc.
- Design and manage scalable workflows for campaign automation, nurture, retargeting, and enrichment.
- Serve as the technical lead for data syncs, API workflows, and tool interoperability across GTM systems.
- Conduct regular stack audits for performance, redundancy, and compliance.
- Lead the process to sunset or downscale technologies that are no longer needed or viable.
- Drive experimentation through A/B tools, landing page builders, and personalization platforms.

Marketing Analytics & Insights
Mission: Be the single source of truth for go-to-market (GTM) performance and funnel diagnostics.
Responsibilities:
- Connect with the day-to-day realities of our rapidly growing business to define analytics that inform better business decisions, and secure buy-in and ongoing use.
- Define and track KPIs across acquisition, engagement, conversion, and velocity by segment and geo.
- Build dashboards and reports for channel performance, CAC, MQL-to-Close, funnel conversion, and ROI.
- Partner with Finance and RevOps for budget pacing, forecast accuracy, and marketing spend efficiency.
- Provide analytics support to product marketing, growth, events, and partnerships to enable insight-led decisions.
- Run lead scoring, attribution modeling, and scenario analysis to guide investment across campaigns and markets.
- Lead monthly and quarterly business reviews, surfacing insights and recommending pivots.

Qualifications
- 6–10 years of experience in marketing operations and analytics roles in a B2B SaaS company.
- Proven track record of supporting $10M–$100M ARR growth through operational excellence.
- Deep hands-on experience with HubSpot across marketing automation, workflows, segmentation, and reporting.
- Strong understanding of GTM funnels, pipeline metrics, attribution models, and lifecycle marketing.
- Excellent cross-functional collaborator with Sales, SDR, Product Marketing, and Growth teams.
- An initiative taker, a “thinker and doer”, who is highly structured, detail-oriented, and a hands-on problem solver and executor.
- Bonus: You’re a certified HubSpot whiz or power user with mastery of automation and CRM workflows.

🎯 Success = GTM Growth Enablement
This role is central to Leena AI’s next stage of growth. Your success will be measured by:
- Operational efficiency, stability, and reliability
- Acceleration in MQL > Opportunity conversion rates
- Improvements in pipeline velocity
- Optimized CAC and campaign ROI
- Scalable systems and data-driven decision making across the GTM engine
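The UTM governance duty above can be sketched in miniature. An illustrative check only: the required keys and allowed sources below are hypothetical, not Leena AI's actual taxonomy.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical governance rules; a real taxonomy would live in the MAP or CRM.
REQUIRED_UTM_KEYS = {"utm_source", "utm_medium", "utm_campaign"}
ALLOWED_SOURCES = {"google", "linkedin", "email", "webinar"}

def validate_utm(url: str) -> list[str]:
    """Return a list of governance violations for a campaign URL."""
    params = parse_qs(urlparse(url).query)
    issues = []
    for key in sorted(REQUIRED_UTM_KEYS - params.keys()):
        issues.append(f"missing {key}")
    source = params.get("utm_source", [""])[0]
    if source and source not in ALLOWED_SOURCES:
        issues.append(f"unknown utm_source: {source}")
    return issues
```

A check like this would typically run in CI against every landing-page link before a campaign goes live.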

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company: Qualcomm India Private Limited
Job Area: Information Technology Group, Information Technology Group > IT Data Engineer

General Summary
The developer will play an integral role in the PTEIT Machine Learning Data Engineering team: design, develop and support data pipelines in a hybrid cloud environment to enable advanced analytics, and design, develop and support CI/CD of data pipelines and services.

- 5+ years of experience with Python or an equivalent language, using OOPS, data structures and algorithms
- Develop new services in AWS using serverless and container-based services
- 3+ years of hands-on experience with the AWS suite of services (EC2, IAM, S3, CDK, Glue, Athena, Lambda, Redshift, Snowflake, RDS)
- 3+ years of expertise in scheduling data flows using Apache Airflow
- 3+ years of strong data modelling (functional, logical and physical) and data architecture experience in a Data Lake and/or Data Warehouse
- 3+ years of experience with SQL databases
- 3+ years of experience with CI/CD and DevOps using Jenkins
- 3+ years of experience with event-driven architecture, especially Change Data Capture
- 3+ years of experience in Apache Spark, SQL, Redshift (or BigQuery or Snowflake), Databricks
- Deep understanding of building efficient data pipelines with data observability, data quality, schema drift, alerting and monitoring
- Good understanding of data catalogs, data governance, compliance, security, and data sharing
- Experience in building reusable services across data processing systems
- Ability to work and contribute beyond defined responsibilities
- Excellent communication and interpersonal skills with deep problem-solving skills

Minimum Qualifications
- 3+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems or a related field, OR 5+ years of IT-related work experience without a Bachelor's degree
- 2+ years of any combination of academic or work experience with programming (e.g., Java, Python)
- 1+ year of any combination of academic or work experience with SQL or NoSQL databases
- 1+ year of any combination of academic or work experience with data structures and algorithms
- 5 years of industry experience and a minimum of 3 years of data engineering development experience with highly reputed organizations
- Proficiency in Python and AWS
- Excellent problem-solving skills
- Deep understanding of data structures and algorithms
- Proven experience in building cloud-native software, preferably with the AWS suite of services
- Proven experience in designing and developing data models using an RDBMS (Oracle, MySQL, etc.)

Desirable
- Exposure or experience in other cloud platforms (Azure and GCP)
- Experience working on the internals of large-scale distributed systems and databases such as Hadoop and Spark
- Working experience on Data Lakehouse platforms (OneHouse, Databricks Lakehouse)
- Working experience with Data Lakehouse file formats (Delta Lake, Iceberg, Hudi)
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3074716
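The Change Data Capture experience this listing asks for boils down to folding a stream of change events into current table state. A minimal sketch, assuming a simplified event shape; real CDC payloads (e.g. from Debezium) carry a fuller envelope.

```python
def apply_cdc_events(events):
    """Fold a stream of change-data-capture events into latest row state.

    Each event is a dict with 'id', 'ts', 'op' ('upsert' or 'delete') and
    'row': a simplified shape assumed purely for illustration.
    """
    state = {}
    latest_ts = {}
    for ev in events:
        key = ev["id"]
        if key in latest_ts and ev["ts"] < latest_ts[key]:
            continue  # ignore out-of-order, stale events
        latest_ts[key] = ev["ts"]
        if ev["op"] == "delete":
            state.pop(key, None)
        else:
            state[key] = ev["row"]
    return state
```

In production this merge would typically be a Spark or warehouse MERGE job rather than an in-memory fold, but the ordering and tombstone logic is the same.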

Posted 2 months ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JOB DESCRIPTION
Role: DevOps / Site Reliability Engineer
Location: Pune
Experience: 7+ Years
Duration: 6 Months (Possible Extension)
Shift Timing: 11:30 AM – 9:30 PM IST

About the Role
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

Key Responsibilities
- Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
- Configure and maintain observability tools (logs, metrics, traces, alerts).
- Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
- Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
- Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
- Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
- Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
- Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

Required Skills & Experience
- 5–10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
- Strong coding/scripting skills and the ability to review production-grade backend code.
- Hands-on experience with Kubernetes in production, preferably on GKE.
- Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
- Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management).
- Systems thinking — understands failure domains, cascading issues, timeout limits, and recovery strategies.
- Strong communication and documentation skills — capable of driving improvements through PRs and design reviews.

Tech Stack & Tools
- Cloud & Orchestration: GKE, Kubernetes
- IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
- Monitoring & Alerting: Datadog, PagerDuty
- Databases & Networking: CloudSQL, Cloudflare
- Security & Access Control: Secret Management, IAM

If interested, share your resume at aditya.dhumal@leanitcorp.com
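The configuration-drift responsibility can be illustrated with a toy diff. This is a sketch over in-memory dicts; real drift detection would compare Terraform state or live cloud resources, not hard-coded maps.

```python
def find_drift(environments: dict) -> dict:
    """Report keys whose values differ across environments.

    `environments` maps env name -> flat config dict (e.g. resource tags).
    Illustrative only: env names and tag keys here are hypothetical.
    """
    all_keys = set().union(*(cfg.keys() for cfg in environments.values()))
    drift = {}
    for key in sorted(all_keys):
        values = {env: cfg.get(key) for env, cfg in environments.items()}
        if len(set(values.values())) > 1:  # more than one distinct value = drift
            drift[key] = values
    return drift
```

A report like this, run on a schedule, turns "reduce configuration drift" from an audit chore into an alertable metric.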

Posted 2 months ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Join the Future of Supply Chain Intelligence — Powered by Agentic AI At Resilinc, we’re not just solving supply chain problems — we’re pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action — before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most — from factory floors to patient care. Our advantage isn’t just technology — it’s the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It’s how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale. But the real power behind Resilinc? Our people. We’re a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them — when and where they need them. Whether it’s helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you’ll see your impact every day. If you're passionate about building technology that matters, driven by purpose, and being an agent of change who is ready to shape the next era of self-healing supply chains, we’d love to meet you. Resilinc | Innovation with Purpose. Intelligence with Impact. About The Role At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we’re pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What’s an Agent Hacker? It’s not just a title — it’s a mindset. 
You’re the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you’ll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You’ll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications—ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker — Resilinc is your proving ground.

What You Will Do
- Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance.
- Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, with a good understanding of data integrity and data models.
- Assess AI models using quality metrics such as precision/recall and hallucination detection.
- Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness.
- Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations.
- Contribute to service and tool development.
- Conduct scalability, latency, and performance testing for AI-driven applications.
- Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs.
- Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing.
- Work closely with cross-functional teams to integrate automation best practices into the development lifecycle.
- Identify, document, and track bugs while conducting detailed regression testing to ensure product quality.

What You Will Bring
- Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest.
- Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness.
- Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL.
- Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results.

Why You Will Love It Here
🧠 Next-Level QA – Go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies
🤖 Agentic AI Frontier – Be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises
🛠️ Full-Stack Test Engineering – Combine Python, SQL, and tools like LangTest, Arize, Selenium & Playwright to test everything from APIs to AI fairness
🌍 Purpose-Driven Mission – Join a remote-first team that protects critical supply chains, ensuring vital products reach people when they need them most

What's in it for you?
At Resilinc, we’re fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process please contact HR@resilinc.com.
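The evaluation metrics named here, precision/recall and hallucination detection, reduce to simple set arithmetic. A minimal sketch that treats generated claims and source facts as sets of strings, which is a deliberate simplification of real claim matching.

```python
def precision_recall(predicted: set, relevant: set) -> tuple:
    """Precision and recall of predicted positives vs ground truth."""
    true_pos = len(predicted & relevant)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

def hallucination_rate(claims: list, source_facts: set) -> float:
    """Fraction of generated claims not grounded in the source facts.

    Exact string membership stands in for semantic entailment here.
    """
    unsupported = [c for c in claims if c not in source_facts]
    return len(unsupported) / len(claims) if claims else 0.0
```

Tools like LangTest or Arize wrap far richer versions of these checks, but the underlying quantities are the same.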

Posted 2 months ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Join the Future of Supply Chain Intelligence — Powered by Agentic AI At Resilinc, we’re not just solving supply chain problems — we’re pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action — before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most — from factory floors to patient care. Our advantage isn’t just technology — it’s the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It’s how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale. But the real power behind Resilinc? Our people. We’re a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them — when and where they need them. Whether it’s helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you’ll see your impact every day. If you're passionate about building technology that matters, driven by purpose, and being an agent of change who is ready to shape the next era of self-healing supply chains, we’d love to meet you. Resilinc | Innovation with Purpose. Intelligence with Impact. About The Role At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we’re pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What’s an Agent Hacker? It’s not just a title — it’s a mindset. 
You’re the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you’ll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You’ll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications—ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker — Resilinc is your proving ground.

What You Will Do
- Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance.
- Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, with a good understanding of data integrity and data models.
- Assess AI models using quality metrics such as precision/recall and hallucination detection.
- Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness.
- Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations.
- Contribute to service and tool development.
- Conduct scalability, latency, and performance testing for AI-driven applications.
- Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs.
- Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing.
- Work closely with cross-functional teams to integrate automation best practices into the development lifecycle.
- Identify, document, and track bugs while conducting detailed regression testing to ensure product quality.

What You Will Bring
- Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest.
- Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness.
- Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL.
- Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results.

Why You Will Love It Here
🧠 Next-Level QA – Go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies
🤖 Agentic AI Frontier – Be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises
🛠️ Full-Stack Test Engineering – Combine Python, SQL, and tools like LangTest, Arize, Selenium & Playwright to test everything from APIs to AI fairness
🌍 Purpose-Driven Mission – Join a remote-first team that protects critical supply chains, ensuring vital products reach people when they need them most

What's in it for you?
At Resilinc, we’re fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process please contact HR@resilinc.com.

Posted 2 months ago

Apply

1.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Overview
Cvent is a leading meetings, events and hospitality technology provider with more than 4,800 employees and nearly 22,000 customers worldwide. Founded in 1999, the company delivers a comprehensive event marketing and management platform for event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business. The DNA of Cvent is our people, and our culture has an emphasis on fostering intrapreneurship – a system that encourages Cventers to think and act like individual entrepreneurs and empowers them to act, embrace risk, and make decisions as if they had founded the company themselves. We foster an environment that promotes agility, which means we don’t have the luxury to wait for perfection. At Cvent, we value the diverse perspectives that each individual brings. Whether working with a team of colleagues or with clients, we ensure that we foster a culture that celebrates differences and builds shared connections.

About The Role
As a key member of our Marketing Technology and Automation team, you will play a crucial role in leveraging technology to automate and elevate our global marketing programs. This position requires you to have experience with marketing technology, as you will be an administrator of our marketing automation platform, Marketo. You will work closely with various teams to implement initiatives, support marketing system administration, ensure governance, and analyze performance. Your responsibilities will include developing and executing programs in Marketo to drive demand generation and enhance prospect and customer engagement. You will also support lead nurturing, scoring, dynamic segmentation, and database optimization efforts. Additionally, you will manage integrations with Marketo, Salesforce, and other marketing technologies, while proactively researching and implementing the latest best practices and strategies.
Join us in this exciting opportunity to make a significant impact on our marketing automation efforts, drive demand generation, and contribute to the growth and success of Cvent.

In This Role, You Will
- Develop and execute programs in Marketo to drive demand generation and increase prospect and customer engagement.
- Support essential initiatives like lead nurturing, scoring, dynamic segmentation, and database optimization.
- Maintain and support integrations to Marketo, Salesforce, and other marketing technologies.
- Manage marketing automation efforts and processes, proactively researching and implementing the latest best practices, strategies, and industry standards.
- Design and execute data management programs to bring better alignment between systems.
- Build and analyze reporting to show technical and automation effectiveness and trends.

Here's What You Need
- 1–4 years of experience using a marketing automation tool (Marketo preferred; HubSpot, Salesforce Marketing Cloud, or Eloqua also welcomed).
- Understanding of marketing automation and demand generation concepts, and the ability to implement them using a marketing automation platform.
- Attention to detail, deadlines, and the ability to prioritize and execute multiple tasks.
- Excellent communication, problem-solving, teamwork, and future-thinking skills.
- Ability to dig in to understand user requirements and expectations and deliver on them.
- Fair understanding of CRM (preferably Salesforce) systems and setup.
- Experience with integrated marketing tools like Marketo, Salesforce, Cvent, 6sense, Reachdesk, Drift, Bizible, Vidyard, and more.
- Experience working in a fast-paced, collaborative environment.
- Demonstrated ability working with a globally dispersed team.
- Basic knowledge of HTML.
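Lead scoring of the kind this role builds in Marketo can be sketched as a weighted sum of behaviours. The point values and MQL threshold below are hypothetical, purely for illustration.

```python
# Hypothetical point values: a real scoring model lives in the marketing
# automation platform and is tuned against conversion data.
ACTION_POINTS = {
    "pricing_page_view": 20,
    "webinar_attended": 15,
    "email_open": 2,
    "form_fill": 25,
}
MQL_THRESHOLD = 50  # illustrative cut-off for marketing-qualified

def score_lead(actions: list) -> tuple:
    """Sum behavioural points for a lead and flag MQL status."""
    score = sum(ACTION_POINTS.get(a, 0) for a in actions)
    return score, score >= MQL_THRESHOLD
```

Real models also add demographic "grading" and score decay over time, but the additive core looks like this.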

Posted 2 months ago

Apply

1.0 - 2.0 years

0 Lacs

Hyderābād

On-site

Join our applied-ML team to help turn data into product features: recommendation engines, predictive scores, and intelligent dashboards that ship to real users. You’ll prototype quickly, validate with metrics, and productionise models alongside senior ML engineers.

Day-to-Day Responsibilities
- Clean, explore, and validate datasets (Pandas, NumPy, SQL)
- Build and evaluate ML/DL models (scikit-learn, TensorFlow / PyTorch)
- Develop reproducible pipelines, moving from notebooks to scripts to Airflow / Kubeflow
- Participate in feature engineering, hyper-parameter tuning, and model-selection experiments
- Package and expose models as REST/gRPC endpoints; monitor drift & accuracy in prod
- Share insights with stakeholders through visualisations and concise reports

Must-Have Skills
- 1–2 years building ML models in Python
- Solid understanding of supervised learning workflows (train/validate/test, cross-validation, metrics)
- Practical experience with at least one deep-learning framework (TensorFlow or PyTorch)
- Strong data-wrangling skills (Pandas, SQL) and basic statistics (A/B testing, hypothesis testing)
- Version-control discipline (Git) and comfort with Jupyter-based experimentation

Good-to-Have
- Familiarity with MLOps tooling (MLflow, Weights & Biases, SageMaker)
- Exposure to cloud data platforms (BigQuery, Snowflake, Redshift)
- Knowledge of NLP or CV libraries (spaCy, Hugging Face Transformers, OpenCV)
- Experience containerising ML services with Docker and orchestrating with Kubernetes
- Basic understanding of data-privacy and responsible-AI principles

Job Types: Full-time, Permanent
Pay: From ₹19,100.00 per month
Benefits: Health insurance, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Fixed shift, Monday to Friday
Experience: Junior Machine-Learning Engineer: 1 year (Preferred)
Work Location: In person
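The cross-validation workflow listed under must-have skills can be sketched without any framework. A stdlib-only version of the index splits that scikit-learn's KFold produces; real experiments would typically also shuffle and stratify.

```python
def kfold_indices(n_samples: int, k: int):
    """Yield (train, validation) index lists for k-fold cross-validation.

    Each sample appears in exactly one validation fold; fold sizes differ
    by at most one when k does not divide n_samples.
    """
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, val
        start += size
```

Averaging a metric over the k validation folds gives a less noisy estimate of generalisation than a single train/test split.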

Posted 2 months ago

Apply

3.0 years

0 - 0 Lacs

Coimbatore

Remote

ML Engineer | 3+ years | Remote | Work Timing: Standard IST

Job Description:
We are looking for a skilled Machine Learning Engineer with hands-on experience deploying models on Google Cloud Platform (GCP) using Vertex AI. This role involves enabling real-time and batch model inferencing based on specific business requirements, with a strong focus on production-grade ML deployments.

Key Responsibilities:
- Deploy machine learning models on GCP using Vertex AI.
- Design and implement real-time and batch inference pipelines.
- Monitor model performance, detect drift, and manage the model lifecycle.
- Ensure adherence to model governance best practices and support MLOps workflows.
- Collaborate with cross-functional teams to support Credit Risk, Marketing, and Customer Service use cases, especially within the retail banking domain.
- Develop scalable and maintainable code in Python and SQL.
- Work with diverse datasets, perform feature engineering, and build, train, and fine-tune advanced predictive models.
- Contribute to model deployment in the lending space.

Required Skills & Experience:
- Strong expertise in Python and SQL.
- Proficient with ML libraries and frameworks such as scikit-learn, pandas, NumPy, spaCy, CatBoost, etc.
- In-depth knowledge of GCP Vertex AI and ML pipeline orchestration.
- Experience with MLOps and model governance.
- Exposure to use cases in retail banking: Credit Risk, Marketing, and Customer Service.
- Experience working with structured and unstructured data.

Nice to Have:
- Prior experience deploying models in the lending domain.
- Understanding of regulatory considerations in financial services.

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 – ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday; morning, UK, or US shift
Application Question(s): Are you ready to move onsite (Bangalore/Pune)?
Education: Bachelor's (Preferred)
Experience: ML Engineer: 3 years (Required)
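Drift detection, one of the responsibilities above, is often done with the population stability index (PSI) over binned score distributions. A minimal sketch, assuming bin proportions are already computed; the conventional thresholds quoted in the comment are rules of thumb, not a standard.

```python
import math

def population_stability_index(expected, actual) -> float:
    """PSI between two binned distributions (proportions summing to 1).

    Common heuristic: PSI < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift worth investigating.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # floor to avoid log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi
```

In a Vertex AI setting, the `expected` bins would come from training data and `actual` from a window of recent predictions, with the PSI value exported to monitoring.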

Posted 2 months ago

Apply

0 years

0 Lacs

India

Remote

Job Listing Detail

Summary
Gainwell is seeking LLM Ops Engineers and ML Ops Engineers to join our growing AI/ML team. This role is responsible for developing, deploying, and maintaining scalable infrastructure and pipelines for Machine Learning (ML) models and Large Language Models (LLMs). You will play a critical role in ensuring smooth model lifecycle management, performance monitoring, version control, and compliance while collaborating closely with Data Scientists and DevOps.

Your role in our mission

Core LLM Ops Responsibilities:
- Develop and manage scalable deployment strategies specifically tailored for LLMs (GPT, Llama, Claude, etc.).
- Optimize LLM inference performance, including model parallelization, quantization, pruning, and fine-tuning pipelines.
- Integrate prompt management, version control, and retrieval-augmented generation (RAG) pipelines.
- Manage vector databases, embedding stores, and document stores used in conjunction with LLMs.
- Monitor hallucination rates, token usage, and overall cost optimization for LLM APIs or on-prem deployments.
- Continuously monitor models for performance and ensure an alert system is in place.
- Ensure compliance with ethical AI practices, privacy regulations, and responsible AI guidelines in LLM workflows.

Core ML Ops Responsibilities:
- Design, build, and maintain robust CI/CD pipelines for ML model training, validation, deployment, and monitoring.
- Implement version control, model registry, and reproducibility strategies for ML models.
- Automate data ingestion, feature engineering, and model retraining workflows.
- Monitor model performance and drift, and ensure proper alerting systems are in place.
- Implement security, compliance, and governance protocols for model deployment.
- Collaborate with Data Scientists to streamline model development and experimentation.

What we're looking for
- Bachelor's/Master's degree in computer science, engineering, or related fields.
- Strong experience with ML Ops tools (Kubeflow, MLflow, TFX, SageMaker, etc.).
- Experience with LLM-specific tools and frameworks (LangChain, LangGraph, LlamaIndex, Hugging Face, OpenAI APIs; vector DBs like Pinecone, FAISS, Weaviate, Chroma DB, etc.).
- Solid experience in deploying models in cloud (AWS, Azure, GCP) and on-prem environments.
- Proficient in containerization (Docker, Kubernetes) and CI/CD practices.
- Familiarity with monitoring tools like Prometheus, Grafana, and ML observability platforms.
- Strong coding skills in Python and Bash, and familiarity with infrastructure-as-code tools (Terraform, Helm, etc.).
- Knowledge of healthcare AI applications and regulatory compliance (HIPAA, CMS) is a plus.
- Strong skills in Giskard, DeepEval, etc.

What you should expect in this role
- Fully remote opportunity – work from anywhere in India.
- Minimal travel required – occasional travel opportunities (0–10%).
- Opportunity to work on cutting-edge AI solutions in a mission-driven healthcare technology environment.
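The RAG and vector-database work described here centres on nearest-neighbour retrieval over embeddings. A toy version using cosine similarity over plain lists, as a stand-in for the vector databases the listing names (Pinecone, FAISS, Weaviate, Chroma); document ids and vectors are invented for illustration.

```python
import math

def cosine(u, v) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query_vec, store: dict, top_k: int = 2) -> list:
    """Return the top_k document ids most similar to the query embedding.

    `store` maps doc id -> embedding vector; a real system would use an
    approximate-nearest-neighbour index instead of a full scan.
    """
    ranked = sorted(store, key=lambda d: cosine(query_vec, store[d]), reverse=True)
    return ranked[:top_k]
```

In a RAG pipeline the retrieved documents are then stuffed into the LLM prompt as grounding context, which is why retrieval quality directly bounds hallucination rates.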

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies