ML Engineer - Forward Deployed INDIA

Experience: 0 years

Salary: 0 Lacs

Posted: 1 week ago | Platform: Glassdoor

Work Mode: On-site

Job Type: Part Time

Job Description

We’re building Orbital, an industrial AI system that runs live in refineries and upstream assets, ingesting sensor data, running deep learning + physics hybrid models, and serving insights in real time.
As a Forward Deployed ML Engineer, you’ll sit at the intersection of research and deployment: turning notebooks into containerised microservices, wiring up ML inference pipelines, and making sure they run reliably in demanding industrial environments.

This role is not just about training models. You’ll write PyTorch code when needed, package models into Docker containers, design message-brokered microservice architectures, and deploy them in hybrid on-prem/cloud setups. You’ll also be customer-facing: working with process engineers and operators to integrate Orbital into their workflows.

Core Responsibilities

Model Integration & Engineering

  • Take research models (time-series, deep learning, physics-informed) and productionise them in PyTorch.
  • Wrap models into containerised services (Docker/Kubernetes) with clear APIs.
  • Optimise inference pipelines for latency, throughput, and reliability.
Microservices & Messaging

  • Design and implement ML pipelines as multi-container microservices.
  • Use message brokers (Kafka, RabbitMQ, etc.) to orchestrate data flow between services.
  • Ensure pipelines are fault-tolerant and scalable across environments.

Forward Deployment & Customer Integration
  • Deploy AI services into customer on-prem environments (industrial networks, restricted clouds).
  • Work with customer IT/OT teams to integrate with historians, OPC UA servers, and real-time data feeds.
  • Debug, monitor, and tune systems in the field — ensuring AI services survive messy real-world data.
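The broker-orchestrated pipeline described above can be sketched in miniature with an in-process `queue.Queue` standing in for Kafka/RabbitMQ and a stub function standing in for a real PyTorch model (both stand-ins are illustrative, not Orbital's actual components):

```python
import json
import queue
import threading

def stub_model(window):
    """Stand-in inference: flag a sensor window whose mean exceeds a limit."""
    mean = sum(window) / len(window)
    return {"mean": mean, "alert": mean > 100.0}

def inference_worker(inbox: queue.Queue, outbox: queue.Queue):
    """Consume sensor messages, run inference, publish results downstream."""
    while True:
        msg = inbox.get()
        if msg is None:          # poison pill shuts the worker down cleanly
            break
        payload = json.loads(msg)
        scores = stub_model(payload["values"])
        outbox.put(json.dumps({"tag": payload["tag"], **scores}))

# Wire up the pipeline: producer -> inbox -> worker -> outbox -> consumer.
inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=inference_worker, args=(inbox, outbox))
worker.start()

inbox.put(json.dumps({"tag": "FI-101", "values": [98.0, 103.0, 105.0]}))
inbox.put(None)
worker.join()

result = json.loads(outbox.get())
print(result)
```

In a real deployment the two queues become broker topics, the worker becomes its own container, and the stub becomes a loaded PyTorch model, but the consume/infer/publish loop keeps the same shape.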


Software Engineering Best Practices

  • Maintain clean, testable, container-ready codebases.
  • Implement CI/CD pipelines for model deployment and updates.
  • Work closely with product and data engineering teams to align system interfaces.
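One common shape for a "container-ready" model service is a small, layer-cached Dockerfile along these lines (the file layout and module names are hypothetical, not taken from the posting):

```dockerfile
# Hypothetical layout; Orbital's actual repo structure is not specified.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so Docker caches this layer
# across code-only changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code and baked-in model weights.
COPY service/ ./service/
COPY models/ ./models/

# Run the inference API (module name is illustrative).
CMD ["python", "-m", "service.inference_api"]
```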

Requirements

  • MSc in Computer Science, Machine Learning, Data Science, or related field, or equivalent practical experience.
  • Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
  • Solid software engineering background — designing and debugging distributed systems.
  • Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
  • Familiarity with message brokers (Kafka, RabbitMQ, or similar).
  • Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
  • Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
  • Ability to work in forward-deployed settings, collaborating directly with customers.

What Success Looks Like

  • Research models are hardened into fast, reliable services that run in production.
  • Customers see Orbital AI running live in their environment without downtime.
  • Microservice-based ML pipelines scale cleanly, with message brokering between components.
  • You become the go-to engineer bridging AI research, product, and customer integration.
