We're looking for a passionate Machine Learning Engineer with a strong foundation in computer vision, model training and deployment, observability pipelines, and DevOps for AI systems. You’ll play a pivotal role in building scalable, accurate, and production-ready ML systems in a collaborative and distributed environment.

Responsibilities
- Train, fine-tune, and evaluate object detection and OCR models (YOLO, GroundingDINO, PP-OCR, TensorFlow).
- Build and manage observability pipelines for evaluating model performance in production (accuracy tracking, drift analysis).
- Develop Python-based microservices and asynchronous APIs for ML model serving and orchestration.
- Package and deploy ML services using Docker and Kubernetes across distributed environments.
- Implement and manage distributed computing workflows with NATS messaging (see the sketch after this listing).
- Collaborate with DevOps to configure networking (VLANs, ingress rules, reverse proxies) and firewall access for scalable deployment.
- Use Bash scripting and Linux CLI tools (e.g., sed, awk) for automation and log parsing.
- Design modular, testable Python code using OOP and software packaging principles.
- Work with PostgreSQL, MongoDB, and TinyDB for structured and semi-structured data ingestion and persistence.
- Manage system processes using Python concurrency primitives (threading, multiprocessing, semaphores, etc.).

Required Skills
- Languages: Python (OOP, async IO, modularity), Bash
- Computer Vision: OpenCV, Label Studio, YOLO, GroundingDINO, PP-OCR
- ML & MLOps: Training pipelines, evaluation metrics, observability tooling
- DevOps & Infra: Docker, Kubernetes, NATS, ingress & firewall configs
- Data: PostgreSQL, MongoDB, TinyDB
- Networking: Subnetting, VLANs, service access, reverse proxy setup
- Tools: sed, awk, firewalls, reverse proxies, Linux process control
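To illustrate the kind of NATS-based workflow this role involves, here is a minimal sketch of an async Python worker, assuming the nats-py client and a local NATS server; the subject names and payload shape are hypothetical, not part of the posting.

```python
import asyncio
import json

import nats  # nats-py client (assumed available)


async def main():
    # Connect to a local NATS server (address is an assumption).
    nc = await nats.connect("nats://localhost:4222")

    async def handle_inference(msg):
        # Hypothetical payload: {"image_id": "...", "model": "yolo"}
        request = json.loads(msg.data)
        # ... run the model here and build a result dict ...
        result = {"image_id": request.get("image_id"), "status": "done"}
        # Reply on the requester's inbox if one was provided.
        if msg.reply:
            await nc.publish(msg.reply, json.dumps(result).encode())

    # Subscribe to a hypothetical request subject.
    await nc.subscribe("ml.inference.requests", cb=handle_inference)

    # Keep the worker alive.
    await asyncio.Event().wait()


if __name__ == "__main__":
    asyncio.run(main())
```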
Looking for an MLOps & Computer Vision Engineer to join our team.

✨ What you’ll work on:
- Building & fine-tuning object detection (YOLO, TensorFlow, GroundingDINO) and OCR models (PP-OCR).
- Creating observability pipelines to track accuracy as part of the MLOps lifecycle (a minimal sketch follows this listing).
- Deploying distributed systems with NATS & Kubernetes.
- Containerizing ML workloads with Docker for training & inference.
- Crafting robust Python applications (async coroutines, OOP, modular design, basic DSA).
- Managing PostgreSQL, MongoDB, and TinyDB databases.
- Automating with Bash, sed, and awk, and handling networking (VLANs, subnets, ingress rules, reverse proxies).

🛠 Tech stack highlights: Python, FastAPI, OpenCV, Kubernetes, Docker, NATS, Bash, PostgreSQL, MongoDB.

🌟 Experience: ~3 years in full-stack ML systems, with a strong ownership and problem-solving mindset.
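As a rough illustration of the accuracy-tracking work described above, here is a minimal sketch of a rolling accuracy monitor over production predictions; the window size, alert threshold, and record format are assumptions for the example only.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class PredictionRecord:
    # Hypothetical record: model output vs. a label obtained later.
    predicted: str
    actual: str


class RollingAccuracyTracker:
    """Tracks accuracy over a sliding window of recent predictions."""

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.90):
        self.window = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def add(self, record: PredictionRecord) -> None:
        self.window.append(record.predicted == record.actual)

    @property
    def accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self) -> bool:
        # Flag possible drift once the window is full and accuracy dips below threshold.
        return len(self.window) == self.window.maxlen and self.accuracy < self.alert_threshold


tracker = RollingAccuracyTracker()
tracker.add(PredictionRecord(predicted="invoice", actual="invoice"))
print(f"rolling accuracy: {tracker.accuracy:.2%}, degraded: {tracker.degraded()}")
```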
We are looking to onboard a candidate who can support our team in conducting in-depth code reviews for AI/ML models developed using open-source frameworks. The candidate will work closely with our team, reviewing their code regularly, improving code quality, and advising on best practices for model development and deployment.

We need an onsite candidate in Hyderabad with the following skills:
- Strong proficiency in Python
- Expertise in Machine Learning and Deep Learning
- Hands-on experience with TensorFlow, PyTorch, and other open-source AI libraries
- Familiarity with model explainability, fairness, and bias detection
- Experience in reviewing and optimizing model pipelines and training loops
- Solid understanding of MLOps practices, including CI/CD for ML
- Exposure to distributed model training and GPU optimization
- Experience with unit testing and performance profiling for AI models
- Knowledge of model versioning, reproducibility, and experiment tracking, e.g., using MLflow or Weights & Biases (a minimal sketch follows this listing)
- Familiarity with data validation, feature engineering pipelines, and data drift detection
- Ability to suggest improvements to architecture, modularization, and documentation of code

The ideal candidate has a proven track record of working on production-grade AI systems and can mentor our team to elevate our engineering standards.
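For context on the experiment-tracking point above, here is a minimal sketch of logging a training run with MLflow; the experiment name, hyperparameters, and metric values are placeholders, not a prescribed setup.

```python
import mlflow

# Group runs under a named experiment (name is a placeholder).
mlflow.set_experiment("detector-finetuning")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters so the run is reproducible.
    mlflow.log_param("learning_rate", 1e-4)
    mlflow.log_param("epochs", 10)

    for epoch in range(10):
        # In a real pipeline this value would come from the training loop.
        train_loss = 1.0 / (epoch + 1)
        mlflow.log_metric("train_loss", train_loss, step=epoch)

    # Artifacts such as the final config or weights could be attached here, e.g.:
    # mlflow.log_artifact("config.yaml")
```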
We are looking for an experienced Python / Full Stack Developer (4+ years of experience) to join our engineering team and help us build AI-driven web applications.

Key Responsibilities
• Develop and maintain scalable Python applications using test-driven development (TDD).
• Integrate AI/ML models into full stack applications.
• Build and optimize React-based web applications with modern UI practices.
• Debug and troubleshoot issues across the stack (frontend, backend, networking).
• Design, virtualize, and containerize applications/services for deployment.
• Collaborate within agile workflows, adapting to dynamic requirements.
• Communicate effectively and work closely with cross-functional teams.

Requirements
1. Python Backend Development – Strong base in engineering practices, unit testing (e.g., pytest, unittest), FastAPI, BCrypt, NumPy, and AI model deployment (CUDA integration is a plus). A minimal FastAPI + pytest sketch follows this listing.
2. Frontend Development – Hands-on experience in React, component-driven development, and state management (Redux or similar).
3. Databases – Experience with PostgreSQL, including schema optimization and query performance.
4. DevOps & Tools – Familiarity with Docker (Kubernetes knowledge is a plus), Git workflows, and VS Code.
5. System Design – Baseline understanding of high-level design (HLD) concepts.
6. Soft Skills – Strong communication skills, teamwork, and ability to thrive in agile environments.

Good to Have
• Experience deploying AI models into production.
• Knowledge of distributed systems and scaling full stack applications.
• Contributions to open-source projects or personal AI/React projects.
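To ground the TDD and FastAPI points above, here is a minimal sketch of an endpoint and its pytest-style test; the route, payload, and use of bcrypt hashing are illustrative assumptions, not this team's actual API.

```python
import bcrypt
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()


class SignupRequest(BaseModel):
    username: str
    password: str


@app.post("/signup")
def signup(req: SignupRequest):
    # Hash the password with bcrypt; a real app would persist it.
    hashed = bcrypt.hashpw(req.password.encode(), bcrypt.gensalt())
    return {"username": req.username, "password_is_hashed": hashed != req.password.encode()}


# Test written in pytest style: drive the endpoint through FastAPI's TestClient.
client = TestClient(app)


def test_signup_hashes_password():
    response = client.post("/signup", json={"username": "alice", "password": "s3cret"})
    assert response.status_code == 200
    body = response.json()
    assert body["username"] == "alice"
    assert body["password_is_hashed"] is True
```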
We are seeking a Principal Product Manager to lead the product strategy, roadmap, and execution, driving alignment with business needs, technology capabilities, and enterprise cost optimisation goals. This role requires a blend of product leadership, technical expertise in data platforms, FinOps knowledge, and stakeholder management.

Key Responsibilities
- Product Strategy & Vision: Define and evolve the product vision, strategy, and roadmap. Drive future capabilities including SLA monitoring, cross-charging frameworks, AI-driven insights, and XOPS integration.
- Execution & Delivery: Translate vision into actionable epics, features, and user stories within Agile/SAFe frameworks. Lead prioritization of features across multiple regions and stakeholders. Partner with Solution Architects, Program Managers, and Engineering to ensure timely and high-quality delivery.
- Stakeholder Engagement: Collaborate with senior leadership and product owners to drive adoption and investment. Act as the primary voice for execution teams while aligning with global product leadership.
- FinOps & Cost Optimization: Drive cost transparency and cross-charge models for data platforms. Identify savings opportunities such as cluster right-sizing, reserved instances, and shared services.
- Product Growth & Insights: Own and refine KPIs (e.g., SLA compliance %, cost per data pipeline, user growth, sector adoption rate). Leverage telemetry and analytics to improve product adoption and performance.
- Team & Culture: Mentor product managers and business analysts within regional and global teams. Foster a customer-centric, outcome-driven culture across delivery teams.

Required Qualifications
- 12+ years of experience in Product Management, with at least 5 years in a Principal / Senior PM capacity.
- Proven experience in data platforms, observability, or FinOps products.
- Strong understanding of cloud platforms (Azure, Databricks, ADF, ElasticSearch).
- Demonstrated ability to operate in Agile/SAFe environments, managing complex backlogs across geographies.
- Strong analytical, financial, and business acumen to manage cost optimization and ROI tracking.
- Exceptional stakeholder management and communication skills, with executive presence.

Preferred Qualifications
- Experience with enterprise-scale product rollouts across multiple regions.
- Knowledge of AI/ML, agentic solutions, and telemetry frameworks.
- Prior experience in data-driven enterprise environments.
- MBA or equivalent in Product/Strategy Management (preferred, not mandatory).

What We Offer
- Opportunity to lead a flagship data product with global adoption.
- A collaborative and innovative work culture with international exposure.
- Flexible employment model (full-time or contract).
- Competitive compensation aligned with experience and impact.
The Role
We're looking for a Senior Graphics & Generation Engineer to evolve how we transform text into visuals. You'll work at the intersection of AI and graphics, specialising in idea, data, and flow visualisation systems that generate numerous types of professional graphics.

What You'll Do
- Scale our text-to-graphics pipeline for millions of users
- Turn subjective feature targets into measurable results, optimising for visual quality and user satisfaction
- Design intelligent layout algorithms for idea, data, and flow visualisations
- Build template systems for dynamic SVG generation, manipulation, and rendering (a minimal sketch follows this listing)

What We're Looking For
- 5+ years in Python development with graphics programming experience
- Expertise in SVG generation, rendering, and optimisation
- Background in data visualisation and flow diagram generation
- Ability to quantify and measure subjective quality improvements
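As a small illustration of template-driven SVG generation in Python, here is a sketch that renders a horizontal flow of labelled boxes using only the standard library; the layout constants and node labels are made up for the example.

```python
import xml.etree.ElementTree as ET


def render_flow(nodes, width=160, height=60, gap=40):
    """Render a horizontal flow of labelled boxes as an SVG string."""
    total_w = len(nodes) * width + (len(nodes) - 1) * gap
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(total_w), height=str(height))
    for i, label in enumerate(nodes):
        x = i * (width + gap)
        # Box for the node.
        ET.SubElement(svg, "rect", x=str(x), y="0", width=str(width),
                      height=str(height), fill="#eef", stroke="#336")
        # Centred label (text-anchor / dominant-baseline handle centring).
        text = ET.SubElement(svg, "text", x=str(x + width / 2), y=str(height / 2),
                             **{"text-anchor": "middle", "dominant-baseline": "middle"})
        text.text = label
        # Connector line to the next node.
        if i < len(nodes) - 1:
            ET.SubElement(svg, "line", x1=str(x + width), y1=str(height / 2),
                          x2=str(x + width + gap), y2=str(height / 2), stroke="#336")
    return ET.tostring(svg, encoding="unicode")


print(render_flow(["Prompt", "Layout", "SVG"]))
```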
The Role
We're seeking a Senior Frontend Engineer to architect and scale our browser-based graphics editor. You'll tackle performance challenges and complex visual editing systems, and push the boundaries of what's possible with in-browser graphics manipulation.

What You'll Do
- Design and implement high-performance visual editing features in the browser
- Optimize rendering, caching, and memory management for smooth graphics interactions
- Build sophisticated state management for complex visual compositions without sacrificing speed
- Collaborate with AI and generation teams to seamlessly integrate new visual features
- Work with talented designers who care about pixel-perfect results

What We're Looking For
- 5+ years building complex, performant web applications
- Deep expertise in browser optimization and graphics rendering
- Experience with visual editors, canvas/SVG manipulation, or design tools
- Self-directed problem solver who thrives in remote environments