
677 Drift Jobs - Page 20

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Must-Have Skills & Traits

Core Engineering
- Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices
- Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django
- Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning
- Experience working with databases (SQL/NoSQL), caching layers, and background job queues

AI/ML & GenAI Expertise
- Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment
- Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers
- Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search (see the sketch below)
- Comfortable working with unstructured data (text, images) in real-world product environments
- Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate

Ownership & Execution
- Demonstrated ability to take full ownership of features or modules from architecture to delivery
- Able to work independently in ambiguous situations and drive solutions with minimal guidance
- Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions
- Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance

Nice-to-Have Skills
- Experience in startup or fast-paced product environments
- 2-5 years of relevant experience
- Familiarity with asynchronous programming patterns in Python
- Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge
- Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation
- Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools
- Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines
- Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
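To make the RAG-and-vector-search requirement concrete, here is a minimal retrieval sketch. It is illustrative only: the embedding model, the toy documents, and the deferred LLM call are assumptions, not details from the posting.

```python
# Minimal RAG retrieval sketch (illustrative; model name, documents,
# and the downstream LLM call are assumptions, not from the posting).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refund requests are processed within 5 business days.",
    "Premium users get priority support via chat.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

# Inner product on unit vectors is cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

def retrieve(query: str, k: int = 1) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)
    _, idx = index.search(np.asarray(q, dtype="float32"), k)
    return [docs[i] for i in idx[0]]

context = "\n".join(retrieve("How fast are refunds?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How fast are refunds?"
# The assembled prompt would then be sent to an LLM (OpenAI, Hugging Face, etc.).
print(prompt)
```

A production system would swap the toy list for a real document store and pass the assembled prompt to one of the LLM stacks named above.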

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About Beco
Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper—sparking our commitment to "Be Eco" every day. Our mission: "To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time." Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India's top marketplaces, retail chains and D2C channels.

Why we're hiring
Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco—owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales.

Responsibilities
- Partner with functional leaders to translate business pain-points into AI/ML solutions and automation opportunities.
- Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring.
- Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks.
- Stand up CI/CD and MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases.
- Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy).
- Mentor analysts and engineers; evangelise a culture of experimentation and "fail-fast" learning—core to Beco's GSD ("Get Sh#!t Done") values.

Must-have Qualifications
- 3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production.
- Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering.
- Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure).
- Familiarity with modern Gen-AI & NLP stacks (OpenAI, Hugging Face, RAG, vector databases).
- Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment.

Nice-to-haves
- Exposure to e-commerce or FMCG supply-chain data.
- Knowledge of finance workflows (reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce).
- Experience with vision models (Detectron2, YOLO) and edge deployment.
- Contributions to open-source ML projects or published papers/blogs.

What Success Looks Like After 1 Year
- 70% reduction in manual reporting hours across finance and ops.
- Forecast accuracy > 85% at SKU level, slashing stock-outs by 30% (see the forecasting sketch below).
- AI chatbot resolves 60% of tickets end-to-end, with CSAT > 4.7/5.
- At least two new data-products launched that directly boost topline or margin.

Life at Beco
- Purpose-driven team obsessed with measurable climate impact.
- "Entrepreneurial, accountable, bold" culture—where winning minds precede outside victories.
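The SKU-level forecasting goal above can be sketched with Prophet, one of several reasonable tool choices here; the column names and placeholder demand series are assumptions:

```python
# Hedged sketch of a SKU-level demand forecast with Prophet.
import pandas as pd
from prophet import Prophet

# Expected input: one row per day, columns ds (date) and y (units sold).
history = pd.DataFrame({
    "ds": pd.date_range("2024-01-01", periods=90, freq="D"),
    "y": [50 + 0.3 * i + 2 * (i % 7) for i in range(90)],  # placeholder demand
})

model = Prophet(weekly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=30)  # forecast 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

Comparing `yhat` against actuals per SKU is then a straightforward way to track the accuracy target named above.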

Posted 1 month ago

Apply

2.5 - 5.0 years

5 - 11 Lacs

India

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning from Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift (see the instrumentation sketch below).

Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2.5 - 5 years
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
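As a rough illustration of the Prometheus monitoring responsibility, this sketch instruments a stand-in predict() function with request and latency metrics; the metric names, port, and model logic are assumptions:

```python
# Sketch: exposing inference latency and request counts to Prometheus.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Total inference requests")
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

@LATENCY.time()  # records each call's duration into the histogram
def predict(payload: dict) -> float:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference
    return 0.42

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        predict({"text": "example"})
```

Pointing a Prometheus server at the /metrics endpoint then lets you alert on latency or throughput regressions, and the same pattern extends to drift scores.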

Posted 1 month ago

Apply

12.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Location: Ahmedabad
Reports: team of 70 across Software & Design, Product, and R&D (4)
Reporting to: Promoter/Director

JOB DETAILS

Domain & Experience Requirements
- 12+ years in engineering; 5+ years in senior technology leadership (CTO, VP Engineering, CIO).
- Demonstrated success in healthcare/wellness/fitness SaaS or medical-device integration.

Role Summary
Own the end-to-end technology and information stack for our cloud-native HMIS/Tele-health SaaS platform. Provide visionary architectural leadership (system, application, cloud, and data) and harden security & compliance. Create a hands-on, jovial, ownership-driven engineering culture, and deliver new AI-powered products at startup velocity—while scaling to the level of a nationwide public-health utility.

Key Responsibilities
- Architecture Leadership – Define and evolve system, application, cloud and database architectures for a multi-tenant, high-availability SaaS platform; maintain direct involvement in the product roadmap.
- Cloud & Infrastructure – Own cloud cost, performance and DR across two GCP regions; manage internal IT (laptops, SASE, MDM) with zero-trust controls.
- AI/ML Enablement – Embed AI/ML capabilities (ASR + RAG summarisation, anomaly detection) into HMIS modules; evaluate and productionise new models.
- Security, Risk & Compliance – Lead ISO 27001, SOC 2, HIPAA, NABH compliance; enforce DevSecOps, threat-modelling, pen-testing and vulnerability management.
- Product Documentation & SDLC – Set up and enforce a seamless SDLC; establish and audit SOPs; ensure every service, run-book and recovery plan is documented.
- People & Culture – Foster a culture of innovation, collaboration and continuous improvement; keep teams motivated and jovial; coach by example on the dev floor.
- Pre-Sales & Client Engagement – Partner with the sales team to design solution demos, proofs-of-concept and technical bid responses; engage directly with C-suite stakeholders at prospective hospitals to gather requirements and position the platform's value proposition.
- Stakeholder & Vendor Management – Translate clinical requirements into epics; present tech OKRs to the Board; manage cloud, AI and medical-device vendors.

Technical Expertise & Expectations
- Holistic Architecture Leadership – Shape system, application, cloud, and data architectures that stay performant, maintainable, and cost-efficient as the platform scales.
- SaaS Platform Stewardship – Guide the evolution of a multi-tenant, always-on health-technology product, balancing feature delivery with platform reliability.
- Hands-On Engineering – Stay close to the code: review pull requests, troubleshoot production issues, and mentor engineers through practical example.
- Product Partnership – Convert business and clinical requirements into clear technical roadmaps and measurable engineering objectives.
- AI/ML Awareness – Identify pragmatic opportunities to embed data-driven and AI capabilities that enhance clinical workflows and user experience.
- Process & SDLC Ownership – Establish robust DevSecOps, CI/CD, infrastructure-as-code, and documentation practices that keep releases predictable and secure.
- Security, Risk & Compliance Oversight – Maintain a proactive security posture, comprehensive SOPs, and continuous compliance with relevant healthcare and data-protection standards.
- Health Interoperability Standards – Knowledge of FHIR, DICOM, SNOMED CT, HL7, and related standards is highly desirable.
- Technology Foresight – Monitor emerging trends, assess their relevance, and pilot new tools or patterns that can strengthen the platform.
- Embedded Systems & Hardware Insight – Knowledge of firmware, IoT, or medical-device hardware development is a distinguishing factor.

Personal Qualities
- Ownership mentality – treats uptime, cost and code quality as personal responsibilities.
- Methodical planner – works to clear quarterly and sprint plans; avoids scope drift.
- Visible, hands-on leader – is present on the dev floor; white-boards solutions; joins incident calls.
- Jovial motivator – energises stand-ups, celebrates wins, runs hack-days.

Qualification
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. An MBA or advanced healthcare-related qualification is a plus.

Please send your updated resume along with confirmation of interest.

Regards,
Pooja Raval - Sr. Consultant / TL
Send CV and reply to: unitedtechit@uhr.co.in
We will call you for a detailed discussion if your profile has relevant experience.

Posted 1 month ago

Apply

0.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Designation: Senior Analyst – Data Science
Level: L2
Experience: 4 to 6 years
Location: Chennai

Job Description:
We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments.

Responsibilities:
- Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production
- Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment (a minimal flow sketch follows below)
- Develop CI/CD pipelines for machine learning models, ensuring reliable and automated deployment processes
- Monitor model performance in production and implement automated retraining and rollback mechanisms
- Collaborate with data scientists to productionize models and optimize them for performance and scalability
- Implement model versioning, experiment tracking, and metadata management systems
- Build monitoring and alerting systems for model drift, data quality, and system performance
- Manage containerization and orchestration of ML workloads using Docker and Kubernetes
- Optimize model serving infrastructure for low-latency predictions and high throughput
- Ensure compliance with financial regulations and implement proper model governance frameworks

Skills:
- 4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services
- Strong expertise in deploying and scaling machine learning models in production environments
- Extensive experience with Metaflow for ML pipeline orchestration and workflow management
- Advanced proficiency with Git and version control systems, including branching strategies and collaborative workflows
- Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes)
- Strong programming skills in Python with experience in ML libraries (pandas, numpy, scikit-learn)
- Experience with CI/CD tools and practices for ML workflows
- Knowledge of distributed computing and cloud-based ML infrastructure
- Understanding of model monitoring, A/B testing, and feature store management

Additional Skillsets:
- Experience with Hex or similar data analytics platforms
- Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems
- Experience with real-time model serving and streaming data processing
- Familiarity with MLflow, Kubeflow, or other ML lifecycle management tools
- Understanding of financial regulations and model governance requirements

Job Snapshot
Updated Date: 13-06-2025
Job ID: J_3745
Location: Chennai, Tamil Nadu, India
Experience: 4 - 6 Years
Employee Type: Permanent
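A minimal Metaflow flow, to make the pipeline-orchestration requirement concrete; the dataset and the "model" are placeholders, not the team's actual pipeline:

```python
# Minimal Metaflow training flow sketch; data and model are placeholders.
from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):

    @step
    def start(self):
        # In practice: load features from the warehouse or a feature store.
        self.data = [(x, 2 * x) for x in range(1, 100)]
        self.next(self.train)

    @step
    def train(self):
        # Stand-in for real model fitting (e.g., scikit-learn).
        xs, ys = zip(*self.data)
        self.slope = sum(ys) / sum(xs)  # trivially "learned" parameter
        self.next(self.end)

    @step
    def end(self):
        print(f"trained slope={self.slope:.2f}")

if __name__ == "__main__":
    TrainFlow()
```

Running `python train_flow.py run` executes the steps in order, and Metaflow versions each run's artifacts automatically.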

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role Overview
As a Test Automation Lead at Dailoqa, you'll architect and implement robust testing frameworks for both software and AI/ML systems. You'll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance.

Key Responsibilities

Test Automation Strategy & Framework Design
- Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions.
- Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy (see the pytest sketch below).
- Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization).

Continuous Testing & CI/CD Integration
- Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD.
- Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing).

AI/ML Model Validation
- Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow.
- Validate model drift and retraining pipelines to ensure consistent performance in production.

Quality Metrics & Reporting
- Define and track KPIs: test coverage (code, data, scenarios), defect leakage rate, automation ROI (time saved vs. maintenance effort), and model accuracy thresholds.
- Report risks and quality trends to stakeholders in sprint reviews.
- Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation).

Soft Skills
- Strong problem-solving skills for balancing speed and quality in fast-paced AI development.
- Ability to communicate technical risks to non-technical stakeholders.
- Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).

Requirements

Must-Have
- 5–8 years in test automation, with 2+ years validating AI/ML systems.
- Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter
- CI/CD: Jenkins, GitHub Actions, GitLab
- AI/ML testing: model validation, drift detection, GenAI output evaluation
- Languages: Python, Java, or JavaScript
- Certifications: ISTQB Advanced, CAST, or equivalent
- Experience with MLOps tools: MLflow, Kubeflow, TFX
- Familiarity with vector databases (Pinecone, Milvus) and RAG workflows
- Strong programming/scripting experience in JavaScript, Python, Java, or similar
- Experience with API testing, UI testing, and automated pipelines
- Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation
- Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias
- Familiarity with MLOps pipelines and automated validation of model performance in production
- Exposure to Agile/Scrum methodology and tools like Azure Boards
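For the GenAI output-validation work described above, a property-style pytest suite is one common pattern: assert structure and bounds rather than exact strings, since LLM output is non-deterministic. The generate() wrapper below is hypothetical:

```python
# Sketch of property-style tests for non-deterministic LLM output.
import json
import pytest

def generate(prompt: str) -> str:
    """Hypothetical wrapper around the real LLM call being tested."""
    return json.dumps({"sentiment": "positive", "confidence": 0.93})

@pytest.mark.parametrize("prompt", [
    "Classify: 'Great product, fast delivery'",
    "Classify: 'Amazing support team'",
])
def test_output_is_valid_json_with_required_fields(prompt):
    out = json.loads(generate(prompt))          # output must parse as JSON
    assert set(out) >= {"sentiment", "confidence"}
    assert out["sentiment"] in {"positive", "negative", "neutral"}
    assert 0.0 <= out["confidence"] <= 1.0      # bounded score, not exact value
```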

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

About Us:
Waltcorp is at the forefront of cloud engineering, helping businesses transform their operations by leveraging the power of Google Cloud Platform (GCP). We are seeking a skilled and visionary GCP DevOps Solutions Architect – ML/AI Focus to design and implement cloud solutions that address our clients' complex business challenges.

Key Responsibilities:
- Solution Design: Collaborate with stakeholders to understand business requirements and design scalable, secure, and high-performing GCP cloud architectures.
- Technical Leadership: Serve as a technical advisor, guiding teams on GCP best practices, services, and tools to optimize performance, security, and cost efficiency.
- Infrastructure Development: Architect and oversee the deployment of cloud solutions using GCP services such as Compute Engine, Cloud Storage, Cloud Functions, Cloud SQL, and more.
- Infrastructure as Code (IaC) & Cloud Automation: Design, implement, and manage infrastructure using Terraform, Google Cloud Deployment Manager, or Pulumi. Automate provisioning of compute, storage, and networking resources using GCP services like Compute Engine, Cloud Storage, VPC, IAM, GKE (Google Kubernetes Engine), and Cloud Run. Implement and maintain CI/CD pipelines (using Cloud Build, Jenkins, GitHub Actions, or GitLab CI).
- ML Model Deployment & Automation (MLOps): Build and optimize end-to-end ML pipelines using Vertex AI Pipelines, Kubeflow, or MLflow. Automate training, testing, validation, and deployment of ML models in staging and production environments. Support model versioning, reproducibility, and lineage tracking using tools like DVC, Vertex AI Model Registry, or MLflow.
- Monitoring & Logging: Implement monitoring for both infrastructure and ML workflows using Cloud Monitoring, Prometheus, Grafana, and Vertex AI Model Monitoring. Set up alerting for anomalies in ML model performance (data drift, concept drift). Ensure application logs, model outputs, and system metrics are centralized and accessible.
- Containerization & Orchestration: Containerize ML workloads using Docker and orchestrate using GKE or Cloud Run. Optimize resource usage through autoscaling and right-sizing of ML workloads in containers.
- Data & Experiment Management: Integrate with data versioning tools (e.g., DVC or LakeFS) to track datasets used in model training. Enable experiment tracking using MLflow, Weights & Biases, or Vertex AI Experiments (a minimal MLflow tracking sketch follows below). Support reproducible research and automated experimentation pipelines.
- Client Engagement: Communicate complex technical solutions to non-technical stakeholders and deliver high-level architectural designs, presentations, and proposals.
- Integration and Migration: Plan and execute cloud migration strategies, integrating existing on-premises systems with GCP infrastructure.
- Security and Compliance: Implement robust security measures, including IAM policies, encryption, and monitoring, to ensure compliance with industry standards and regulations.
- Documentation: Develop and maintain detailed technical documentation for architecture designs, deployment processes, and configurations.
- Continuous Improvement: Stay current with GCP advancements and emerging trends, recommending updates to architecture strategies and tools.

Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Experience: 3+ years of experience in cloud architecture, with a focus on GCP.
- Technical Expertise:
  - Strong knowledge of GCP core services, including compute, storage, networking, and database solutions.
  - Proficiency in Infrastructure as Code (IaC) tools like Terraform, Deployment Manager, or Pulumi.
  - Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, GKE, or Cloud Run).
  - Understanding of DevOps practices, CI/CD pipelines, and automation.
  - Strong command of networking concepts such as VPCs, load balancing, and firewall rules.
  - Familiarity with scripting languages like Python or Bash.

Preferred Qualifications:
- Google Cloud Certified – Professional Cloud Architect or Professional DevOps Engineer.
- Expertise in engineering and maintaining MLOps and AI applications.
- Experience in hybrid cloud or multi-cloud environments.
- Familiarity with monitoring and logging tools such as Cloud Monitoring, ELK Stack, or Datadog.

[CLOUD-GCDEPS-J25]
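To illustrate the experiment-tracking responsibility, here is a minimal MLflow sketch; the experiment name, parameters, and metric values are placeholders:

```python
# Hedged MLflow experiment-tracking sketch; names and values are illustrative.
import mlflow

mlflow.set_experiment("demand-forecast")  # assumed experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 6)
    # ... train and evaluate the model here ...
    mlflow.log_metric("rmse", 12.4)   # placeholder evaluation result
    mlflow.log_metric("mape", 0.08)
```

The same logging pattern carries over to Vertex AI Experiments or Weights & Biases with their respective SDKs.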

Posted 1 month ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 450000 - Rs 550000 (i.e., INR 4.5-5.5 LPA)
Min Experience: 1 year
Job Type: full-time

We are looking for a research-oriented Environmental Data Scientist to spearhead the development of advanced algorithms that improve the accuracy, reliability, and overall performance of air quality sensor data. This role is focused on addressing real-world challenges in environmental sensing, such as sensor drift, cross-interference, and anomalous data behavior — going beyond conventional data science applications.

Requirements

Key Responsibilities:
- Develop and implement algorithms for sensor calibration, signal correction, anomaly detection, and cross-interference mitigation to improve air quality data accuracy and stability (a toy calibration sketch follows below).
- Conduct research on sensor behavior and environmental impacts to guide algorithm design.
- Collaborate with software and embedded systems teams to integrate algorithms into cloud and edge environments.
- Analyze large-scale environmental datasets using Python, R, or similar data analysis tools.
- Validate and refine algorithms using both laboratory and field data through iterative testing.
- Create visualization tools and dashboards to interpret sensor behavior and assess algorithm effectiveness.
- Support environmental research initiatives with data-driven statistical analysis.
- Document methodologies, test results, and findings for internal knowledge sharing and system improvement.
- Contribute to team efforts by writing clean, efficient code and assisting in overcoming programming challenges.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Environmental Engineering, Environmental Science, Chemical Engineering, Electronics/Instrumentation Engineering, Computer Science, Data Science, Physics, or Atmospheric Science with a focus on data/sensing.
- 1-2 years of experience working with sensor data or IoT-based environmental monitoring systems.
- Strong background in algorithm development, signal processing, and statistical data analysis.
- Proficiency in Python (e.g., pandas, NumPy, scikit-learn) or R, with experience managing real-world sensor datasets.
- Ability to design and deploy models in cloud-based or embedded environments.
- Strong analytical thinking, problem-solving skills, and effective communication abilities.
- Genuine interest in environmental sustainability and clean technologies.

Preferred Qualifications:
- Familiarity with time-series anomaly detection, signal noise reduction, sensor fusion, or geospatial data processing.
- Exposure to air quality sensor technology, environmental sensor datasets, or dispersion modeling.
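A toy version of the calibration-plus-anomaly-detection work described above, assuming a linear relationship to a reference monitor and synthetic data; the column names, window size, and threshold are illustrative:

```python
# Sketch: linear calibration of a low-cost sensor against a reference monitor,
# plus a rolling z-score anomaly flag.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({"raw": rng.normal(50, 5, 500)})          # uncalibrated reading
df["reference"] = 0.9 * df["raw"] + 4 + rng.normal(0, 1, 500)  # co-located monitor

# Fit calibration: reference ≈ a * raw + b
cal = LinearRegression().fit(df[["raw"]], df["reference"])
df["calibrated"] = cal.predict(df[["raw"]])

# Flag points drifting more than 3 rolling standard deviations from the mean.
roll = df["calibrated"].rolling(window=60)
z = (df["calibrated"] - roll.mean()) / roll.std()
df["anomaly"] = z.abs() > 3
print(df["anomaly"].sum(), "anomalous readings flagged")
```

Real deployments would replace the synthetic frame with co-location data and re-fit periodically to compensate for sensor drift.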

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Who we are
We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world.

Job summary
As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate.

What you will do
- Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery.
- Develop and maintain infrastructure as code (IaC) — with a primary focus on Terraform — including building reusable, modular, and parameterized modules for scalable infrastructure.
- Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution (a drift-check sketch follows below).
- Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating.
- Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks.
- Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers.
- Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments.
- Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers.
- Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration.
- Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls.
- Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning.
- Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents.

What you will have
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 6+ years in DevOps or a similar role, with strong experience in infrastructure architecture and automation.
- Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD.
- Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel.
- Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov.
- Strong background in cloud networking, VPC design, DNS, and ingress/egress control.
- Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate.
- Hands-on experience with Auth0 or equivalent identity management platforms.
- Proficient in container technologies like Docker, with production deployments via ECS/Fargate.
- Solid experience in vulnerability and compliance management across the infrastructure lifecycle.
- Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development.
- Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk.
- Excellent troubleshooting skills in cloud-native and distributed systems.
- Effective communicator and cross-functional collaborator in Agile/Scrum environments.

Benefits
- Generous time off policies
- Top Shelf Benefits
- Education, wellness and lifestyle support
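One simple way to implement the drift-detection practice mentioned above is to run `terraform plan -detailed-exitcode` on a schedule: exit code 0 means no changes, 2 means drift or pending changes, 1 means an error. The working directory below is a hypothetical module path:

```python
# Sketch: detecting infrastructure drift from CI with terraform plan.
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
    cwd="infra/prod",   # hypothetical Terraform root module
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("No drift detected.")
elif result.returncode == 2:
    print("Drift detected - review plan output:")
    print(result.stdout)
    sys.exit(2)  # fail the CI job so drift gets triaged
else:
    print(result.stderr)
    sys.exit(1)
```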

Posted 1 month ago

Apply

3.0 years

0 Lacs

Uttar Pradesh, India

On-site

Job Description
Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems.

About us:
Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last.

About the role:
We are currently seeking a Machine Learning (Ops) Engineer to join our Digi team based in Noida.

Key Responsibilities:
- ML Pipeline Development and Automation: Design, build, and maintain end-to-end AI/ML CI/CD pipelines using Azure DevOps, leveraging the Azure AI stack (e.g., Azure ML, AI Foundry) and Dataiku.
- Model Deployment and Monitoring: Deliver tooling to deploy AI/ML products into production, ensuring they meet performance, reliability, and security standards. Implement and maintain transversal monitoring solutions to track model performance, detect drift, and trigger retraining when necessary (a drift-test sketch follows below).
- Collaboration and Support: Work closely with data scientists, AI/ML engineers, and the platform team to ensure seamless integration of products into production. Provide technical support and troubleshooting for AI/ML pipelines and infrastructure, particularly in Azure and Dataiku environments.
- Operational Excellence: Define and implement MLOps best practices with a strong focus on governance, security, and quality, while monitoring performance metrics and cost-efficiency to ensure continuous improvement and deliver optimized, high-quality deployments for Azure AI services and Dataiku.
- Documentation and Reporting: Maintain comprehensive documentation of AI/ML pipelines and processes, with a focus on Azure AI and Dataiku implementations. Provide regular updates to the AI Platform Lead on system status, risks, and resource needs.

About you:
- Proven track record of experience in MLOps, DevOps, or related roles
- Strong knowledge of machine learning workflows, data analytics, and Azure cloud
- Hands-on experience with tools and technologies such as Dataiku, Azure ML, Azure AI Services, Docker, Kubernetes, and Terraform
- Proficiency in programming languages such as Python, with experience in ML and automation libraries (e.g., TensorFlow, PyTorch, Azure AI SDK)
- Expertise in CI/CD pipeline management and automation using Azure DevOps
- Familiarity with monitoring tools and logging frameworks

Catch this opportunity and invest in your skills development, should your profile meet these requirements.

Additional attributes:
- A proactive mindset with a focus on operationalizing AI/ML solutions to drive business value
- Experience with budget oversight and cost optimization in cloud environments
- Knowledge of agile methodologies and the software development lifecycle (SDLC)
- Strong problem-solving skills and attention to detail

Work Experience: 3-5 years of experience in MLOps
Minimum Education: Advanced degree (Master's or PhD preferred) in Computer Science, Data Science, Engineering, or a related field.

What's next?
Once we receive your application, our Talent Acquisition professionals will screen and match your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application progress periodically via the personal account created during your application. We invite you to get to know more about our company by visiting our website and following us on LinkedIn, Instagram, Facebook, X and YouTube for company updates.
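A hedged sketch of the drift-detection responsibility above: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against recent production data. The arrays and the significance threshold are illustrative:

```python
# Sketch: feature drift detection with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 5000)  # distribution at training time
live_feature = rng.normal(0.3, 1.0, 5000)   # recent production data (shifted)

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # assumed significance threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}) - consider retraining.")
else:
    print("No significant drift.")
```

In a pipeline, a detected drift event would raise an alert or trigger the retraining job rather than just printing.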

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a Senior Data Quality Engineer to join our innovative team. In this role, you will ensure the accuracy, reliability, and performance of data systems through cutting-edge test automation, performance optimization, and advanced validation techniques. You will play a key role in maintaining data integrity and optimizing workflows to support critical business decisions.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing (a reconciliation-test sketch follows below)
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Hands-on use of tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with handling dynamic edge case testing using AI-based test case generation
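As a small illustration of the ETL-reconciliation testing described above, here is a pytest-style check; the fetch() helper is a hypothetical stand-in for a real query against each environment:

```python
# Sketch: staging vs. production reconciliation as a pytest test.
import pandas as pd
import pytest

def fetch(env: str) -> pd.DataFrame:
    """Hypothetical stand-in for a real query, e.g. pd.read_sql(..., conn[env])."""
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.5, 5.25]})

def test_staging_matches_production():
    staging, prod = fetch("staging"), fetch("prod")
    assert len(staging) == len(prod), "row count mismatch"
    assert staging["amount"].sum() == pytest.approx(prod["amount"].sum())
    # Full-frame comparison after normalizing row order.
    pd.testing.assert_frame_equal(
        staging.sort_values("order_id").reset_index(drop=True),
        prod.sort_values("order_id").reset_index(drop=True),
    )
```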

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra

On-site

Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Innovation & Technology
Job Number: WD30240361

Job Description
Job Title: ML Platform Engineer – AI & Data Platforms

ML Platform Engineering & MLOps (Azure-Focused)
- Build and manage end-to-end ML/LLM pipelines on Azure ML, using Azure DevOps for CI/CD, testing, and release automation.
- Operationalize LLMs and generative AI solutions (e.g., GPT, LLaMA, Claude) with a focus on automation, security, and scalability.
- Develop and manage infrastructure as code using Terraform, including provisioning compute clusters (e.g., Azure Kubernetes Service, Azure Machine Learning compute), storage, and networking.
- Implement robust model lifecycle management (versioning, monitoring, drift detection) with Azure-native MLOps components.

Infrastructure & Cloud Architecture
- Design highly available and performant serving environments for LLM inference using Azure Kubernetes Service (AKS) and Azure Functions or App Services.
- Build and manage RAG pipelines using vector databases (e.g., Azure Cognitive Search, Redis, FAISS) and orchestrate with tools like LangChain or Semantic Kernel.
- Ensure security, logging, role-based access control (RBAC), and audit trails are implemented consistently across environments.

Automation & CI/CD Pipelines
- Build reusable Azure DevOps pipelines for deploying ML assets (data pre-processing, model training, evaluation, and inference services).
- Use Terraform to automate provisioning of Azure resources, ensuring consistent and compliant environments for data science and engineering teams.
- Integrate automated testing, linting, monitoring, and rollback mechanisms into the ML deployment pipeline.

Collaboration & Enablement
- Work closely with Data Scientists, Cloud Engineers, and Product Teams to deliver production-ready AI features.
- Contribute to solution architecture for real-time and batch AI use cases, including conversational AI, enterprise search, and summarization tools powered by LLMs.
- Provide technical guidance on cost optimization, scalability patterns, and high-availability ML deployments.

Qualifications & Skills

Required Experience
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 5+ years of experience in ML engineering, MLOps, or platform engineering roles.
- Strong experience deploying machine learning models on Azure using Azure ML and Azure DevOps.
- Proven experience managing infrastructure as code with Terraform in production environments.

Technical Proficiency
- Proficiency in Python (PyTorch, Transformers, LangChain) and Terraform, with scripting experience in Bash or PowerShell.
- Experience with Docker and Kubernetes, especially within Azure (AKS).
- Familiarity with CI/CD principles, model registry, and ML artifact management using Azure ML and Azure DevOps Pipelines.
- Working knowledge of vector databases, caching strategies, and scalable inference architectures.

Soft Skills & Mindset
- Systems thinker who can design, implement, and improve robust, automated ML systems.
- Excellent communication and documentation skills, capable of bridging platform and data science teams.
- Strong problem-solving mindset with a focus on delivery, reliability, and business impact.

Preferred Qualifications
- Experience with LLMOps, prompt orchestration frameworks (LangChain, Semantic Kernel), and open-weight model deployment.
- Exposure to smart buildings, IoT, or edge-AI deployments.
- Understanding of governance, privacy, and compliance concerns in enterprise GenAI use cases.
- Certification in Azure (e.g., Azure Solutions Architect, Azure AI Engineer, Terraform Associate) is a plus.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About this opportunity:
This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
- Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
- Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
- ELK Integration: Utilize ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization (see the indexing sketch below). Knowledge of OpenSearch and the related stack would be beneficial.
- Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
- Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.

What you will bring:
- Machine Learning Model Development: Collaborate with data scientists to develop and implement machine learning models, ensuring they meet performance and accuracy requirements.
- Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health.
- Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations.
- MLOps (Added Advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows.
- Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Find all you need to know about our typical hiring process on our careers site. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766745
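A sketch of the ELK integration in elasticsearch-py 8.x style; the host, index name, and document shape are assumptions, not Ericsson's actual configuration:

```python
# Hedged sketch: indexing pipeline metrics into Elasticsearch so they can be
# charted in Kibana.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

doc = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "pipeline": "feature-ingest",        # hypothetical pipeline name
    "records_processed": 10_000,
    "status": "success",
}
es.index(index="pipeline-metrics", document=doc)

# Retrieve recent failures for a Kibana-style drill-down.
hits = es.search(
    index="pipeline-metrics",
    query={"term": {"status": "failure"}},
    size=10,
)
print(hits["hits"]["total"])
```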

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Delhi

On-site

Key Responsibilities:

Chiller Maintenance:
- Conduct daily/weekly checks on water-cooled or air-cooled chillers
- Monitor suction/discharge pressures, temperatures, and refrigerant levels
- Clean evaporator and condenser tubes; inspect for scaling or fouling
- Perform oil and filter changes, refrigerant leak detection, and logbook entries

AHU (Air Handling Unit):
- Inspect and clean coils, filters, blower fans, and dampers
- Check motor alignment, belt tension, bearing lubrication, and vibration
- Verify proper airflow, pressure drop, and temperature control operation

Pumps:
- Maintain and troubleshoot chilled water and condenser water pumps
- Monitor mechanical seals, bearings, couplings, and motor health
- Record flow rates and differential pressures, and ensure proper operation

VFDs (Variable Frequency Drives):
- Inspect VFD panels for cooling, dust buildup, and wiring issues
- Monitor drive performance, fault logs, and communication with the BMS
- Coordinate with the controls team for parameter settings and motor tuning

Cooling Towers:
- Perform nozzle cleaning, fan inspection, and drift eliminator checks
- Maintain proper chemical treatment and water level control
- Inspect and maintain the gearbox, fan motor, and louvers

Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹20,000.00 per month
Schedule: Day shift / Evening shift / Morning shift
Work Location: In person

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Cvent is a leading provider of meetings, events, and hospitality technology, boasting over 4,800 employees and nearly 22,000 customers worldwide. Since our founding in 1999, we have delivered a comprehensive event marketing and management platform for event professionals and offered software solutions to hotels, special event venues, and destinations. Our culture emphasizes intrapreneurship, encouraging Cventers to think and act like entrepreneurs. We value diverse perspectives and foster an environment that promotes agility and celebrates differences.

Overview:
Cvent's Global Demand Center is seeking an organized, strategic marketing professional with AI experience to join our team as an Assistant Team Lead, Marketing Technology. Our ideal candidate is a skilled project manager with a passion for marketing technology, an understanding of how marketing systems intersect, and an eagerness to discover new solutions for business needs. At Cvent, you'll be part of a dynamic team that values innovation and creativity. You'll have the opportunity to work with cutting-edge technology and help drive our marketing efforts to new heights. If you're passionate about marketing technology and AI, and thrive in a collaborative environment, we want to hear from you!

What You Will Be Doing
- Manage and Optimize AI-Driven Marketing Efforts: Oversee our end-to-end content supply chain and conversational AI initiatives, ensuring streamlined processes, especially those involving AI.
- Technical Expertise: Serve as a technical expert, onboarding new technologies and optimizing the use of existing tools in our marketing technology stack.
- Enablement and Training: Lead marketing technology enablement and training to ensure the marketing team fully utilizes the capabilities of our tools.
- Administration of AI Systems: Administer marketing AI systems (e.g., Conversational Email, chat AI, UserGems AI), build prompts and agents, and ensure effective tagging and categorization.
- Reporting and ROI Analysis: Assist marketing teams in reporting on the ROI of AI initiatives and participate in the Cvent AI council.
- Gap Identification and Requirement Development: Identify gaps and develop requirements for the automation of manual tasks to enhance marketing efficiency and effectiveness.
- Collaboration and Implementation: Collaborate with marketing team members to implement efficient AI strategies across different teams. Participate in the Cvent Machine Learning Academy.
- Evaluation of New Technologies: Evaluate new AI-focused marketing technologies for alignment with business objectives.
- Stay Updated on AI Trends: Stay abreast of the latest AI trends and innovations, recommending and implementing new tools or practices to enhance marketing efforts.

What You Will Need for this Position
- Bachelor's/Master's degree in Marketing, Business, or a related field.
- Exceptional project management skills, including attention to detail, stakeholder engagement, project plan development, and deadline management with diverse teams.
- Advanced understanding of AI concepts and significant hands-on experience with tools like ChatGPT, Microsoft Azure, Claude, Google Gemini, Glean, etc.
- Skilled in crafting technical documentation and simplifying complex procedures.
- A minimum of 5 years of hands-on technical experience with various marketing technologies like marketing automation platforms, CRM and database platforms (e.g., Salesforce, Snowflake) and other tools (e.g., Drift, Cvent, 6Sense, Writer, Jasper.ai, Copy.ai).
- Strong capacity for understanding and fulfilling project requirements and expectations.
- Excellent communication and collaboration skills, with a strong command of the English language.
- Self-motivated, analytical, eager to learn, and able to thrive in a team environment.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role
We are seeking a highly skilled and hands-on Senior Data Scientist with 7+ years of experience to lead the design and implementation of robust machine learning pipelines and drive data-driven decision making across the organization. This role requires a strategic thinker who can bridge the gap between complex data science concepts and practical business solutions, while ensuring model integrity, explainability, and compliance in production environments. As a Senior Data Scientist, you will have end-to-end ownership of the model life cycle, from data ingestion and feature engineering to model deployment, monitoring, and governance. You'll work closely with AI engineers, product teams, and stakeholders to deliver high-impact solutions that drive business value.

Key Responsibilities

Machine Learning & Predictive Modeling:
- Design and build sophisticated predictive models using scikit-learn, XGBoost, LightGBM, and CatBoost for various use cases.
- Develop advanced forecasting models using Prophet, ARIMA, and neural forecasting techniques for time series analysis.
- Implement anomaly detection systems and risk scoring models for fraud detection and security applications.
- Create recommendation systems and personalization algorithms using collaborative filtering and deep learning approaches.

AI Integration & Pipeline Development
- Collaborate with AI engineers to integrate traditional ML components into LangChain and LLM-driven intelligent systems.
- Design hybrid architectures that combine classical ML with generative AI for enhanced business solutions.
- Develop evaluation frameworks for comparing traditional ML and LLM-based approaches.
- Implement retrieval systems that enhance LLM performance with domain-specific knowledge.

Model Lifecycle Management
- Automate comprehensive model lifecycle processes including training, validation, deployment, and rollback procedures.
- Implement continuous training pipelines using MLflow, Kubeflow, and Weights & Biases.
- Design and maintain model monitoring systems for drift detection, performance degradation, and data quality issues.
- Establish model governance frameworks ensuring reproducibility and auditability.

Data Validation & Quality Assurance
- Lead the development of pre-model and post-model validation frameworks using DeepChecks, Great Expectations, and custom validation rules.
- Implement fairness and bias detection systems using Fairlearn and custom algorithmic auditing tools.
- Design comprehensive data quality monitoring and alerting systems.
- Conduct statistical testing and hypothesis validation for model performance claims.

Compliance & Security
- Ensure PII protection and DPDP compliance through secure data preprocessing and anonymization techniques.
- Implement synthetic data generation pipelines using Gretel.ai and other privacy-preserving technologies.
- Design policy-driven access controls and data governance frameworks using Apache Griffin and DataHub.
- Conduct privacy impact assessments and implement differential privacy techniques where applicable.

Model Explainability & Auditing
- Develop comprehensive model explainability frameworks using SHAP, LIME, and custom interpretation tools (see the SHAP sketch below).
- Conduct reasoning-based walkthroughs and accuracy audits for deployed models.
- Perform bias analysis and fairness assessments across different demographic groups.
- Design and implement A/B testing frameworks for model performance evaluation.

Data Engineering & Pipeline Architecture
- Design and implement scalable ETL/ELT pipelines using Apache Spark, Flink, and modern data processing frameworks.
- Leverage Redis for intelligent caching strategies and real-time feature serving.
- Implement streaming data processing using Apache Kafka, RabbitMQ, and event-driven architectures.
- Optimize data pipeline performance and ensure data consistency across distributed systems.
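To make the SHAP explainability responsibility concrete, here is a minimal sketch on a synthetic dataset; the model and data stand in for real features:

```python
# Sketch: SHAP-based explainability for a tree model on synthetic data.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # per-feature attributions per row

# Global view: mean |SHAP| per feature approximates feature importance.
shap.summary_plot(shap_values, X[:100], show=False)
```

Per-row SHAP values also support the reasoning-based walkthroughs mentioned above, since each prediction can be decomposed into feature contributions.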

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.

Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring (a minimal scheduled-retraining sketch follows below).
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.

Required Skills & Qualifications
- 4+ years of experience in software engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
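A minimal Airflow DAG sketch for the scheduled-retraining pattern mentioned above (Airflow 2.4+ `schedule` argument assumed); the DAG name and task bodies are placeholders:

```python
# Minimal Airflow DAG sketch for weekly retraining; tasks are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    print("pull features, fit model, push to registry")  # stand-in logic

def validate_model():
    print("compare metrics against the current champion model")

with DAG(
    dag_id="weekly_model_retrain",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    retrain = PythonOperator(task_id="retrain", python_callable=retrain_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    retrain >> validate  # validation runs only after retraining succeeds
```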

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role
Data Strategy & Solutions (DS&S) is accelerating the future of investment research at BlackRock with alternative data, insights, and emerging technology. We're a team of skilled Data Engineers and Content & DataOps Specialists dedicated to crafting data-driven investment solutions. DS&S seeks an experienced, high-reaching, and passionate Associate to collaborate closely with investment and technology teams to develop and manage data products derived from web-harvested and unstructured sources, drive research efficiency, and enable data-driven insights. This role offers a unique opportunity to contribute to the design and execution of data operations, with a clear trajectory toward leadership.
You should be
Someone who is passionate about solving sophisticated business problems through data!
Someone who can build and manage production-grade data pipelines for large-scale unstructured and web-harvested data, and who can operate data products end-to-end, from raw acquisition through quality control, delivery, and issue resolution
Enthusiastic about establishing and maintaining standard methodologies for data engineering, focusing on data quality, security, and scalability
Curious and up to date with the latest data management trends and technologies to enhance internal processes and tools
Enthusiastic about collaborating with researchers, technologists, and portfolio managers to ensure data products are well aligned with investment needs
Looking forward to contributing to the technical roadmap and team development, with an eye toward growing into a leadership role within 2–3 years
What We’re Looking For
3-5 years of experience in data engineering or data product operations
Proficiency in Python and SQL, especially in Snowflake and BigQuery
Experience with web scraping frameworks and data governance
Ability to work with unstructured or alternative data sources
Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions, along with experience with Snowflake for data modeling and performance tuning
Strong problem-solving skills and attention to detail
Expertise in detecting anomalies, data drift, and schema changes
A product mindset toward datasets: setting SLAs, roadmaps, and metrics that drive change
Project management experience using Agile, Kanban, or similar methodologies
Excellent documentation skills for technical specs, runbooks, and SOPs
Effective communication skills for collaboration between data engineering and business teams
Leadership readiness, demonstrated through mentoring, roadmap planning, and team coordination
Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our Hybrid Work Model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
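The schema-change detection expertise called out above can be illustrated with a minimal pandas check; the column contract below is a hypothetical example for a web-harvested dataset, not an actual BlackRock schema:

```python
import pandas as pd

# Hypothetical contract for a web-harvested dataset
EXPECTED_SCHEMA = {
    "url": "object",
    "scraped_at": "datetime64[ns]",
    "price": "float64",
}

def check_schema(df: pd.DataFrame) -> list[str]:
    """Return human-readable issues: missing columns, unexpected dtypes,
    and fully-null columns that often signal a broken scraper."""
    issues = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected {dtype}, got {df[col].dtype}")
        elif df[col].isna().all():
            issues.append(f"{col}: all values null")
    return issues

df = pd.DataFrame({"url": ["https://example.com"],
                   "scraped_at": pd.to_datetime(["2024-01-01"]),
                   "price": [19.99]})
assert check_schema(df) == []
```

Treating the dataset as a product means checks like this gate delivery: a non-empty issue list blocks publication and opens an incident against the pipeline's SLA.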

Posted 1 month ago

Apply

25.0 years

4 - 7 Lacs

Cochin

On-site

Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.
Job Overview
The Endpoint Support Analyst will provide critical day-to-day support for Windows and Mac devices. The analyst will work on ServiceNow user tickets and basic project work. The candidate must be self-motivated and autonomous, proficient in written and verbal communication, and experienced with ITIL processes.
Troubleshoot issues with Windows and Mac device enrollment using Microsoft Autopilot and JAMF over-the-air provisioning.
Support OS patching, Microsoft Office, and Google Chrome changes, and review and resolve hardware driver issues.
Monitor and remediate compliance and configuration drift using reports, proactive remediation scripts, and integrated analytics tools such as Log Analytics.
Apply an understanding of Group Policy Objects (GPOs) and Conditional Access policies.
Research and resolve systemic issues and problems with software and hardware on Windows and Mac systems.
Follow escalation procedures when appropriate to resolve processing and user problems in a timely manner and meet service levels and other standards for the job.
Collaborate with the Service Desk and other L1 teams to identify systemic issues and coordinate investigation and solution implementation.
Complete project assignments and ad-hoc project needs commensurate with job expectations.
Basic Qualifications:
Bachelor’s degree and 2 years of Information Systems experience OR
Associate’s degree and 4 years of Information Systems experience OR
High school diploma / GED and 8 years of Information Systems experience
Preferred Qualifications:
4+ years providing end-user support in a multi-system environment, including issue resolution, upgrades/patching, and general management across PC, Mac, tablet, smartphone, VDI, and peripheral devices
Working knowledge of MS Office Suite and browser management required
PowerShell, Python, or other scripting tools would be very helpful
2+ years working with Intune, JAMF, ServiceNow, and Nexthink or 1E Tachyon
Working knowledge of Agile methodology
Ability to address rapidly changing priorities in a fast-paced environment
Familiarity with ITIL-based processes and the use of ServiceNow or a similar management platform
Excellent communication, interpersonal, and writing skills, with the ability to understand customer needs
Passionate about customer service and how it can transform businesses
Excellent project management skills and the ability to multitask with ease
Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.
Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work.
We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
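A minimal Python sketch of the configuration-drift monitoring this role describes: compare a device's reported state against a compliance baseline and emit findings for remediation. The baseline values and device report are hypothetical; real inputs would come from Intune/JAMF inventory exports rather than inline dictionaries:

```python
# Hypothetical compliance baseline for managed endpoints
BASELINE = {"os_min_build": 22631, "bitlocker": True, "chrome_min": 120}

def drift_report(device: dict) -> list[str]:
    """Return a list of compliance findings for one device record."""
    findings = []
    if device.get("os_build", 0) < BASELINE["os_min_build"]:
        findings.append("OS build below patch baseline")
    if not device.get("bitlocker", False):
        findings.append("Disk encryption disabled")
    if device.get("chrome_major", 0) < BASELINE["chrome_min"]:
        findings.append("Chrome below minimum supported version")
    return findings

# Example device record pulled from an inventory export
print(drift_report({"os_build": 22621, "bitlocker": True, "chrome_major": 118}))
```

In practice a proactive remediation script would run such checks fleet-wide on a schedule and open or update ServiceNow tickets for non-compliant devices.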

Posted 1 month ago

Apply

8.0 years

6 - 8 Lacs

Hyderābād

On-site

Key Responsibilities
Test Strategy & Leadership: Define end-to-end test plans for AI solutions (OCR, NLP, document automation), including functional, regression, UAT, and performance testing. Lead a team of QA engineers in Agile/Scrum environments.
AI Product Testing: Validate OCR accuracy (Google/Azure OCR), AI model outputs (layout parsing, data extraction), and NLP logic across diverse document types (invoices, contracts). Design tests for edge cases: low-quality scans, handwritten text, multi-language documents.
Automation & Tooling: Develop and maintain automated test scripts (Selenium, Cypress, pytest) for UI, API, and data validation. Integrate testing into CI/CD pipelines (Azure DevOps/Jenkins).
Quality Advocacy: Collaborate with AI engineers and BAs to identify risks, document defects, and ensure resolution. Report on test metrics (defect density, false positives, model drift).
Client-Focused Validation: Lead on-site/client UAT sessions for Professional Services deployments. Ensure solutions meet client SLAs (e.g., >95% extraction accuracy).
Required Skills & Experience
Experience: 8+ years in software testing, including 3+ years testing AI/ML products (OCR, NLP, computer vision). Proven experience as a Test Lead managing teams (5+ members).
Technical Expertise:
Manual Testing: Deep understanding of AI testing nuances (training data bias, model drift, confidence scores).
Test Automation: Proficiency in Python/Java, Selenium/Cypress, and API testing (Postman/RestAssured).
AI Tools: Hands-on experience with Azure AI, Google Vision OCR, or similar.
Databases: SQL/NoSQL (MongoDB) validation for data pipelines.
Process & Methodology: Agile/Scrum, test planning, defect tracking (JIRA), and performance testing (JMeter/Locust). Knowledge of MLOps/testing practices for AI models.
Preferred Qualifications
Experience with document-intensive domains (P2P, AP, insurance).
Certifications: ISTQB Advanced, AWS/Azure QA, or AI testing certifications.
Familiarity with GenAI testing (LLM validation, hallucination checks).
Knowledge of containerization (Docker/Kubernetes).
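A minimal pytest sketch of the extraction-accuracy SLA check described above; the extractor stub, file path, and ground-truth values are hypothetical stand-ins for the real OCR service and a labeled test corpus:

```python
import pytest

# Hypothetical ground truth for one invoice from a labeled corpus
EXPECTED = {"invoice_no": "INV-1042", "total": "1,250.00", "currency": "USD"}

def extract_fields(document_path: str) -> dict:
    """Stub standing in for the OCR/extraction service under test."""
    return {"invoice_no": "INV-1042", "total": "1,250.00", "currency": "USD"}

def field_accuracy(expected: dict, actual: dict) -> float:
    """Fraction of expected fields extracted with an exact match."""
    hits = sum(actual.get(key) == value for key, value in expected.items())
    return hits / len(expected)

def test_extraction_meets_sla():
    actual = extract_fields("samples/invoice_1042.pdf")  # hypothetical path
    assert field_accuracy(EXPECTED, actual) >= 0.95
```

A real suite would parametrize this over many documents, including the edge cases named above (low-quality scans, handwriting, multi-language), and report accuracy per field and per document type.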

Posted 1 month ago

Apply

4.0 years

11 Lacs

Mohali

On-site

Skill Sets:
Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow)
Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models
Strong experience in NLP, fine-tuning transformer models, and dataset preparation
Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (SageMaker, Vertex AI)
Experience in containerization (Docker, Kubernetes) and CI/CD pipelines
Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning)
Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection
Roles and Responsibilities:
Design and implement end-to-end ML pipelines, from data ingestion to production
Develop, fine-tune, and optimize ML models, ensuring high performance and scalability
Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.)
Automate model retraining, monitoring, and drift detection
Collaborate with engineering teams for seamless ML integration
Mentor junior team members and enforce best practices
Job Type: Full-time
Pay: Up to ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
Application Question(s): How soon can you join us?
Experience: Total: 4 years (Required); Data Science roles: 3 years (Required)
Work Location: In person
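A minimal scikit-learn sketch of the model-comparison responsibility above, computing F1-score and AUC-ROC for two candidate models on synthetic data; the model choices and threshold are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3_000, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

candidates = {
    "logreg": LogisticRegression(max_iter=500),
    "gbm": GradientBoostingClassifier(),
}

for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # positive-class probability
    print(f"{name}: F1={f1_score(y_te, proba > 0.5):.3f}  "
          f"AUC-ROC={roc_auc_score(y_te, proba):.3f}")
```

The same loop generalizes to cross-validation and to task-specific metrics such as BLEU for generation tasks.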

Posted 1 month ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company
Qualcomm India Private Limited
Job Area
Engineering Group, Engineering Group > Software Test Engineering
General Summary
We are seeking an AI System-Level Test Engineer to lead end-to-end testing of Retrieval-Augmented Generation (RAG) AI systems for hybrid, edge-AI inference solutions. This role will focus on designing, developing, and executing comprehensive test strategies for evaluating the reliability, accuracy, usability, and scalability of large-scale AI models integrated with external knowledge retrieval systems. The ideal candidate has deep expertise in AI testing methodologies, experience with large language models (LLMs), expertise in building test solutions for AI inference stacks, RAG, and search/retrieval architecture, and a strong background in automation frameworks, performance validation, and building E2E automation architecture. Experience testing large-scale generative AI applications, familiarity with LangChain, LlamaIndex, or other RAG-specific frameworks, and knowledge of adversarial testing techniques for AI robustness are preferred qualifications.
Key Responsibilities
Test Strategy & Planning
Define end-to-end test strategies for RAG retrieval, generation, response coherence, and knowledge correctness
Develop test plans and automation frameworks to validate system performance across real-world scenarios
Benchmark and optimize deep learning models on AI accelerators/GPUs
Implement E2E solutions to integrate inference systems with customer software workflows
Identify and implement metrics to measure retrieval accuracy and LLM response quality
Test Automation
Build automated pipelines for regression, integration, and adversarial testing of RAG workflows
Validate search relevance, document ranking, and context injection into LLMs using rigorous test cases
Collaborate with ML engineers and data scientists to debug model failures and identify areas for improvement
Conduct scalability and latency tests for retrieval-heavy applications
Analyze failure patterns, drift detection, and robustness against hallucinations and misinformation
Collaboration
Work closely with AI research, engineering, and customer teams to align testing with business requirements
Generate test reports, dashboards, and insights to drive model improvements
Stay up to date with the latest AI testing frameworks, LLM evaluation benchmarks, and retrieval models
Required Qualifications
1+ years of experience in AI/ML system testing, software quality engineering, or related fields
Bachelor’s or master’s degree in computer science engineering, data science, or AI/ML
Hands-on experience with test automation frameworks (e.g., PyTest, Robot Framework, JMeter)
Proficiency in Python, SQL, API testing, vector databases (e.g., FAISS, Weaviate, Pinecone), and retrieval pipelines
Experience with ML model validation metrics (e.g., BLEU, ROUGE, MRR, NDCG)
Expertise in CI/CD pipelines, cloud platforms (AWS/GCP/Azure), and containerization (Docker, Kubernetes)
Minimum Qualifications
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
3075503
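The retrieval metrics named above (MRR, NDCG) can be computed from ranked relevance judgments with a few lines of plain Python; the relevance lists below are illustrative test data, not Qualcomm benchmarks:

```python
import math

def mrr(ranked_relevance: list[list[int]]) -> float:
    """Mean Reciprocal Rank over queries; each inner list holds 0/1
    relevance labels for results in ranked order."""
    total = 0.0
    for rels in ranked_relevance:
        rank = next((i + 1 for i, r in enumerate(rels) if r), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(ranked_relevance)

def ndcg_at_k(rels: list[int], k: int) -> float:
    """Normalized Discounted Cumulative Gain at cutoff k."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg else 0.0

print(mrr([[0, 1, 0], [1, 0, 0]]))   # 0.75: first relevant at ranks 2 and 1
print(ndcg_at_k([0, 1, 1, 0], k=3))  # ~0.69: relevant docs ranked too low
```

Tracking these per release turns retrieval quality into a regression signal the automated pipelines above can gate on.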

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company
Qualcomm India Private Limited
Job Area
Engineering Group, Engineering Group > Systems Test Engineering
General Summary
We are seeking an AI System-Level Test Engineer to lead end-to-end testing of Retrieval-Augmented Generation (RAG) AI systems for hybrid, edge-AI inference solutions. This role will focus on designing, developing, and executing comprehensive test strategies for evaluating the reliability, accuracy, usability, and scalability of large-scale AI models integrated with external knowledge retrieval systems. The ideal candidate has deep expertise in AI testing methodologies, experience with large language models (LLMs), expertise in building test solutions for AI inference stacks, RAG, and search/retrieval architecture, and a strong background in automation frameworks, performance validation, and building E2E automation architecture. Experience testing large-scale generative AI applications, familiarity with LangChain, LlamaIndex, or other RAG-specific frameworks, and knowledge of adversarial testing techniques for AI robustness are preferred qualifications.
Key Responsibilities
Test Strategy & Planning
Define end-to-end test strategies for RAG retrieval, generation, response coherence, and knowledge correctness
Develop test plans and automation frameworks to validate system performance across real-world scenarios
Benchmark and optimize deep learning models on AI accelerators/GPUs
Implement E2E solutions to integrate inference systems with customer software workflows
Identify and implement metrics to measure retrieval accuracy and LLM response quality
Test Automation
Build automated pipelines for regression, integration, and adversarial testing of RAG workflows
Validate search relevance, document ranking, and context injection into LLMs using rigorous test cases
Collaborate with ML engineers and data scientists to debug model failures and identify areas for improvement
Conduct scalability and latency tests for retrieval-heavy applications
Analyze failure patterns, drift detection, and robustness against hallucinations and misinformation
Collaboration
Work closely with AI research, engineering, and customer teams to align testing with business requirements
Generate test reports, dashboards, and insights to drive model improvements
Stay up to date with the latest AI testing frameworks, LLM evaluation benchmarks, and retrieval models
Required Qualifications
2+ years of experience in AI/ML system testing, software quality engineering, or related fields
Bachelor’s or master’s degree in computer science engineering, data science, or AI/ML
Hands-on experience with test automation frameworks (e.g., PyTest, Robot Framework, JMeter)
Proficiency in Python, SQL, API testing, vector databases (e.g., FAISS, Weaviate, Pinecone), and retrieval pipelines
Experience with ML model validation metrics (e.g., BLEU, ROUGE, MRR, NDCG)
Expertise in CI/CD pipelines, cloud platforms (AWS/GCP/Azure), and containerization (Docker, Kubernetes)
Why Join Us?
Work on cutting-edge AI retrieval-augmented generation technologies
Collaborate with world-class AI researchers and engineers
If you are passionate about AI system testing and ensuring the reliability of next-generation generative models, apply now!
Minimum Qualifications
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
3074793
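A minimal pytest sketch of the context-injection validation described above; the rag_answer stub is a hypothetical stand-in for the RAG system under test, and the groundedness check is a deliberately crude substring heuristic:

```python
import pytest

def rag_answer(question: str) -> dict:
    """Stub for the RAG system under test; the real client would return
    the generated answer together with the retrieved contexts."""
    return {"answer": "Paris is the capital of France.",
            "contexts": ["France's capital city is Paris."]}

@pytest.mark.parametrize("question,must_appear", [
    ("What is the capital of France?", "Paris"),
])
def test_answer_is_grounded_in_retrieved_context(question, must_appear):
    out = rag_answer(question)
    # The key entity must appear in the answer...
    assert must_appear in out["answer"]
    # ...and in at least one retrieved context, or the answer may be
    # a hallucination rather than grounded generation.
    assert any(must_appear in ctx for ctx in out["contexts"])
```

Production suites typically replace the substring heuristic with entailment or LLM-as-judge scoring, but the same test shape catches regressions where retrieval returns the wrong documents.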

Posted 1 month ago

Apply

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description
Roles and Responsibilities:
Architecture & Infrastructure Design
Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
Optimize cost and performance of cloud resources used for AI workloads.
AI Project Leadership
Translate business objectives into actionable AI strategies and solutions.
Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
Drive roadmap planning, delivery timelines, and project success metrics.
Model Development & Deployment
Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
Implement frameworks for bias detection, explainability, and responsible AI.
Enhance model performance through tuning and efficient resource utilization.
Security & Compliance
Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
Perform regular audits and vulnerability assessments to ensure system integrity.
Team Leadership & Collaboration
Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
Promote cross-functional collaboration with business and technical stakeholders.
Conduct technical reviews and ensure delivery of production-grade solutions.
Monitoring & Maintenance
Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
Ensure ongoing optimization of infrastructure and ML pipelines.
Must-Have Skills
10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
Expertise in Python for ML development and automation.
Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
Proven track record of delivering AI/ML projects into production environments.
Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
Knowledge of cloud security best practices and IAM role configuration.
Excellent leadership, communication, and stakeholder management skills.
Good-to-Have Skills
AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect.
Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
Experience with AI governance and ethical AI frameworks.
Expertise in cost optimization and performance tuning for AI on the cloud.
Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.
Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
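A minimal boto3 sketch of the CloudWatch-based drift monitoring and alerting described above; the namespace, metric name, alarm name, and threshold are assumptions for illustration, and the calls require AWS credentials and permissions to run:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# A monitoring job publishes a drift score as a custom metric
# ("ML/Monitoring" and "FeatureDriftScore" are assumed names)
cloudwatch.put_metric_data(
    Namespace="ML/Monitoring",
    MetricData=[{"MetricName": "FeatureDriftScore", "Value": 0.27}],
)

# Alarm when average drift exceeds a tolerance over a 5-minute period,
# which can notify an SNS topic that triggers the retraining feedback loop
cloudwatch.put_metric_alarm(
    AlarmName="model-feature-drift",
    Namespace="ML/Monitoring",
    MetricName="FeatureDriftScore",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0.2,
    ComparisonOperator="GreaterThanThreshold",
)
```

In a Terraform-managed environment the alarm itself would typically be declared as IaC, with only the metric publication living in application code.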

Posted 1 month ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site

The Endpoint Support Analyst will provide critical day-to-day support for Windows and Mac devices. The analyst will work on ServiceNow user tickets and basic project work. The candidate must be self-motivated and autonomous, proficient in written and verbal communication, and experienced with ITIL processes.
Troubleshoot issues with Windows and Mac device enrollment using Microsoft Autopilot and JAMF over-the-air provisioning.
Support OS patching, Microsoft Office, and Google Chrome changes, and review and resolve hardware driver issues.
Monitor and remediate compliance and configuration drift using reports, proactive remediation scripts, and integrated analytics tools such as Log Analytics.
Apply an understanding of Group Policy Objects (GPOs) and Conditional Access policies.
Research and resolve systemic issues and problems with software and hardware on Windows and Mac systems.
Follow escalation procedures when appropriate to resolve processing and user problems in a timely manner and meet service levels and other standards for the job.
Collaborate with the Service Desk and other L1 teams to identify systemic issues and coordinate investigation and solution implementation.
Complete project assignments and ad-hoc project needs commensurate with job expectations.
Basic Qualifications:
Bachelor’s degree and 2 years of Information Systems experience OR
Associate’s degree and 4 years of Information Systems experience OR
High school diploma / GED and 8 years of Information Systems experience
Preferred Qualifications:
4+ years providing end-user support in a multi-system environment, including issue resolution, upgrades/patching, and general management across PC, Mac, tablet, smartphone, VDI, and peripheral devices
Working knowledge of MS Office Suite and browser management required
PowerShell, Python, or other scripting tools would be very helpful
2+ years working with Intune, JAMF, ServiceNow, and Nexthink or 1E Tachyon
Working knowledge of Agile methodology
Ability to address rapidly changing priorities in a fast-paced environment
Familiarity with ITIL-based processes and the use of ServiceNow or a similar management platform
Excellent communication, interpersonal, and writing skills, with the ability to understand customer needs
Passionate about customer service and how it can transform businesses
Excellent project management skills and the ability to multitask with ease
Candidate should be ready to relocate to Kochi

Posted 1 month ago

Apply