
Applied Computing Technologies Ltd

2 Job openings at Applied Computing Technologies Ltd
Full Stack Software Engineer - Forward Deployed | India, Karnataka | 0 years | INR: Not disclosed | On-site | Part Time

Description

We’re building Orbital, an industrial AI system that lives inside refineries and upstream assets, serving real-time insights to operators, technologists, and engineers. As a Forward Deployed Full Stack Engineer, you’ll make Orbital usable on the ground: building interfaces, APIs, and integration layers that bring AI outputs directly into operational workflows.

This isn’t a typical web dev role. You’ll work across back-end services, APIs, and industrial integrations, while also shaping front-end interfaces that can survive operator control rooms and engineering workflows. You’ll be customer-facing: working directly with site teams, adapting features in real time, and making sure the system sticks in production. You won’t just productionise models; you’ll install Orbital on customer sites, integrate with live historian and process data pipelines, and ensure the system runs inside customer IT/OT networks.

Core Responsibilities

Application Development
- Build and maintain front-end dashboards and interfaces for refinery operators, technologists, and engineers.
- Develop back-end APIs and services that integrate Orbital’s AI outputs into customer systems (see the API sketch at the end of this posting).
- Ensure applications are secure, reliable, and performant in both cloud and on-prem environments.

Microservices & Integration
- Develop services as containerised microservices, orchestrated in Kubernetes/EKS.
- Connect front-end and back-end layers with message brokers (Kafka, RabbitMQ) and API gateways.
- Integrate with industrial data sources (historians, LIMS, OPC UA, IoT feeds); see the ingestion sketch at the end of this posting.

Forward Deployment & Customer Adaptation
- Deploy full-stack applications in customer on-premise networks.
- Work with process engineers, IT/OT, and operations teams to customise UI/UX for their workflows.
- Debug and iterate features live in the field, ensuring adoption and usability.

Software Engineering Best Practices
- Write clean, modular, and testable code across front-end and back-end.
- Set up CI/CD pipelines for fast iteration and deployment.
- Collaborate closely with product owners and ML engineers to align UI with model capabilities.
- Adapt UX to site-specific workflows (control room, process engineering, production tech teams).
- Collaborate with ML Engineers to surface inference + RCA results in usable, real-time dashboards.

Customer Facing
- Deploy AI microservices in customer on-prem environments (often air-gapped or tightly firewalled) or in our cloud clusters.
- Connect Orbital pipelines to customer historians, OPC UA servers, IoT feeds, and unstructured data sources.
- Build data ingestion flows tailored to each site, ensuring schema, tagging, and drift handling are robust.
- Work with customer IT/OT to manage network, security, and performance constraints.

Requirements
- Strong proficiency in JavaScript/TypeScript (React, Node.js) and back-end frameworks (FastAPI, Express, Django).
- Solid working knowledge of Python for scripting, APIs, and data integration.
- Experience building containerised microservices (Docker, Kubernetes/EKS).
- Familiarity with message brokers (Kafka, RabbitMQ).
- Proficiency with Linux environments (deployment, debugging, performance tuning).
- Hands-on experience with AWS (EKS, S3, IAM, CloudWatch, etc.).
- Bonus: exposure to time-series/industrial data and operator-facing dashboards.
- Comfort working in forward-deployed, on-premise customer environments.

What Success Looks Like
- Orbital has user-facing dashboards and APIs that operators actually use in production.
- Applications run reliably in customer on-premise environments.
- Features are rapidly iterated based on live user feedback from engineers in the field.
- Full-stack code integrates seamlessly with Orbital’s AI/ML microservices.
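As a rough illustration of the back-end API work described above, here is a minimal sketch of a FastAPI endpoint that surfaces model outputs to a dashboard. This is not Orbital's actual code: the in-memory store, unit IDs, and field names are all hypothetical stand-ins.

```python
# Minimal sketch of a back-end API that surfaces AI outputs to a dashboard.
# Hypothetical: Orbital's real schema, storage layer, and auth are not shown.
from datetime import datetime, timezone

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orbital-insights-api")

# Stand-in for a real store (e.g. a database fed by the ML microservices).
LATEST_INSIGHTS: dict[str, dict] = {
    "crude-unit-1": {"anomaly_score": 0.82, "suspect_tags": ["FIC-101.PV"]},
}

class Insight(BaseModel):
    unit_id: str
    anomaly_score: float
    suspect_tags: list[str]
    served_at: str

@app.get("/units/{unit_id}/insights/latest", response_model=Insight)
def latest_insight(unit_id: str) -> Insight:
    """Return the most recent model output for one process unit."""
    record = LATEST_INSIGHTS.get(unit_id)
    if record is None:
        raise HTTPException(status_code=404, detail=f"unknown unit {unit_id}")
    return Insight(
        unit_id=unit_id,
        anomaly_score=record["anomaly_score"],
        suspect_tags=record["suspect_tags"],
        served_at=datetime.now(timezone.utc).isoformat(),
    )
```

A React dashboard would then poll or subscribe to endpoints of this shape to keep operator views current.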
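And for the industrial-integration side, one plausible shape for a site ingestion flow is to read a tag from an OPC UA server and publish it to a broker topic that Orbital's pipeline consumes. The sketch below assumes the asyncua and aiokafka libraries; the server URL, node ID, and topic name are made up.

```python
# Sketch of a site ingestion flow: OPC UA tag -> Kafka topic.
# The server URL, node ID, and topic below are illustrative placeholders.
import asyncio
import json

from aiokafka import AIOKafkaProducer
from asyncua import Client

OPC_URL = "opc.tcp://historian.plant.local:4840"   # hypothetical endpoint
NODE_ID = "ns=2;s=CrudeUnit1.FIC101.PV"            # hypothetical tag
TOPIC = "orbital.raw-tags"

async def ingest_once() -> None:
    producer = AIOKafkaProducer(bootstrap_servers="localhost:9092")
    await producer.start()
    try:
        async with Client(url=OPC_URL) as client:
            node = client.get_node(NODE_ID)
            value = await node.read_value()
            payload = json.dumps({"node": NODE_ID, "value": value}).encode()
            # Downstream Orbital services would consume this topic.
            await producer.send_and_wait(TOPIC, payload)
    finally:
        await producer.stop()

if __name__ == "__main__":
    asyncio.run(ingest_once())
```

In a real deployment this loop would run continuously per site, with the schema, tagging, and drift handling mentioned above layered on top.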

ML Engineer - Forward Deployed | India, Karnataka | 0 years | INR: Not disclosed | On-site | Part Time

Description

We’re building Orbital, an industrial AI system that runs live in refineries and upstream assets, ingesting sensor data, running deep learning + physics hybrid models, and serving insights in real time. As a Forward Deployed ML Engineer, you’ll sit at the intersection of research and deployment: turning notebooks into containerised microservices, wiring up ML inference pipelines, and making sure they run reliably in demanding industrial environments.

This role is not just about training models. You’ll write PyTorch code when needed, package models into Docker containers, design message-brokered microservice architectures, and deploy them in hybrid on-prem/cloud setups. You’ll also be customer-facing: working with process engineers and operators to integrate Orbital into their workflows.

Core Responsibilities

Model Integration & Engineering
- Take research models (time-series, deep learning, physics-informed) and productionise them in PyTorch.
- Wrap models into containerised services (Docker/Kubernetes) with clear APIs (see the inference-service sketch at the end of this posting).
- Optimise inference pipelines for latency, throughput, and reliability.

Microservices & Messaging
- Design and implement ML pipelines as multi-container microservices.
- Use message brokers (Kafka, RabbitMQ, etc.) to orchestrate data flow between services (see the worker sketch at the end of this posting).
- Ensure pipelines are fault-tolerant and scalable across environments.

Forward Deployment & Customer Integration
- Deploy AI services into customer on-prem environments (industrial networks, restricted clouds).
- Work with customer IT/OT teams to integrate with historians, OPC UA servers, and real-time data feeds.
- Debug, monitor, and tune systems in the field, ensuring AI services survive messy real-world data.

Software Engineering Best Practices
- Maintain clean, testable, container-ready codebases.
- Implement CI/CD pipelines for model deployment and updates.
- Work closely with product and data engineering teams to align system interfaces.

Requirements
- MSc in Computer Science, Machine Learning, Data Science, or a related field, or equivalent practical experience.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
- Solid software engineering background: designing and debugging distributed systems.
- Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
- Familiarity with message brokers (Kafka, RabbitMQ, or similar).
- Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
- Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
- Ability to work in forward-deployed settings, collaborating directly with customers.

What Success Looks Like
- Research models are hardened into fast, reliable services that run in production.
- Customers see Orbital AI running live in their environment without downtime.
- Microservice-based ML pipelines scale cleanly, with message brokering between components.
- You become the go-to engineer bridging AI research, product, and customer integration.
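To make the "wrap models into containerised services with clear APIs" responsibility concrete, here is a minimal sketch of a PyTorch inference endpoint of the kind that would be baked into a Docker image. It is not Orbital's actual service: the TorchScript file name, input shape, and scalar output are assumptions.

```python
# Sketch of a PyTorch inference microservice, as might be built into a
# Docker image. The model path and feature layout are illustrative.
import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orbital-inference")

# A TorchScript export keeps the container free of training code.
model = torch.jit.load("model.pt")  # hypothetical artifact baked into the image
model.eval()

class Window(BaseModel):
    # One window of sensor readings, shape (timesteps, channels).
    values: list[list[float]]

@app.post("/predict")
def predict(window: Window) -> dict:
    x = torch.tensor(window.values, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():          # inference only: no autograd bookkeeping
        score = model(x).item()    # assumes a scalar anomaly-score output
    return {"anomaly_score": score}
```

The same container runs unchanged on EKS or an on-prem Kubernetes cluster, which is what makes the hybrid cloud/on-prem deployments tractable.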
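Similarly, the message-brokered side of the pipeline might look like the following sketch of a fault-tolerant worker that consumes sensor windows, scores them, and publishes results. The topic names and the score() stub are placeholders, and the kafka-python client is assumed.

```python
# Sketch of a fault-tolerant inference worker between two Kafka topics.
# Topic names and the scoring call are illustrative placeholders.
import json
import logging

from kafka import KafkaConsumer, KafkaProducer

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orbital-worker")

consumer = KafkaConsumer(
    "orbital.sensor-windows",
    bootstrap_servers="localhost:9092",
    group_id="inference-workers",       # consumer group -> horizontal scaling
    value_deserializer=lambda b: json.loads(b.decode()),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def score(window: dict) -> float:
    """Placeholder for the real model call (e.g. an HTTP hit to /predict)."""
    return 0.0

for message in consumer:
    try:
        result = {"unit": message.value["unit"], "score": score(message.value)}
        producer.send("orbital.predictions", result)
    except Exception:
        # One malformed window must not take the whole pipeline down.
        log.exception("skipping bad message at offset %s", message.offset)
```

Because the worker is stateless and addressed through a consumer group, adding replicas is how a pipeline like this scales across environments.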