3.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India. Job Type: Full Time.

About the Role
We are seeking a highly skilled and visionary Senior Embedded Systems Architect to lead the design and implementation of next-generation AI-powered embedded platforms. This role demands deep technical proficiency across embedded systems, AI model deployment, hardware-software co-design, and media-centric inference pipelines. You will architect full-stack embedded AI solutions using custom AI accelerators such as Google Coral (Edge TPU), Hailo, BlackHole (Torrent), and Kendryte, delivering real-time performance in vision, audio, and multi-sensor edge deployments. The ideal candidate brings a combination of system-level thinking, hands-on prototyping, and experience in optimizing AI workloads for edge inference. This is a high-impact role where you will influence product architecture, ML tooling, hardware integration, and platform scalability for a range of IoT and intelligent device applications.

Key Responsibilities
System Architecture & Design: Define and architect complete embedded systems for AI workloads, from sensor acquisition to real-time inference and actuation. Design multi-stage pipelines for vision/audio inference, e.g., ISP preprocessing -> CNN inference -> postprocessing. Evaluate and benchmark hardware platforms with AI accelerators (TPU/NPU/DSP) for latency, power, and throughput.
Edge AI & Accelerator Integration: Work with Coral, Hailo, Kendryte, Movidius, and Torrent accelerators using their native SDKs (Edge TPU Compiler, HailoRT, etc.). Translate ML models (TensorFlow, PyTorch, ONNX) for inference on edge devices using cross-compilation, quantization, and toolchain optimization (a quantization sketch follows this posting). Lead efforts in compiler flows such as TVM, XLA, Glow, and custom runtime engines.
Media & Sensor Processing Pipelines: Architect pipelines involving camera input, ISP tuning, video codecs, audio preprocessors, or sensor fusion stacks. Integrate media frameworks such as V4L2, GStreamer, and OpenCV into real-time embedded systems. Optimize for frame latency, buffering, memory reuse, and bandwidth constraints in edge deployments.
Embedded Firmware & Platform Leadership: Lead board bring-up, firmware development (RTOS/Linux), peripheral interface integration, and low-power system design. Work with engineers across embedded, AI/ML, and cloud to build robust, secure, and production-ready systems. Review schematics and assist with hardware-software trade-offs, especially around compute, thermal, and memory design.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Electrical, Computer Engineering, Embedded Systems, or related fields.
Experience: Minimum 5 years of experience in embedded systems design. Minimum 3 years of hands-on experience with AI accelerators and ML model deployment at the edge.

Technical Skills Required
Embedded System Design: Strong C/C++, embedded Linux, and RTOS-based development experience. Experience with SoCs and MCUs such as STM32, ESP32, NXP, RK3566/3588, TI Sitara, etc. Cross-architecture familiarity: ARM Cortex-A/M, RISC-V, DSP cores.
ML & Accelerator Toolchains: Proficiency with ML compilers and deployment toolchains: ONNX, TFLite, HailoRT, EdgeTPU compiler, TVM, XLA. Experience with quantization, model pruning, compiler graphs, and hardware-aware profiling.
Media & Peripherals: Integration experience with camera modules, audio codecs, IMUs, and other digital/analog sensors.
Experience with V4L2 , GStreamer , OpenCV , MIPI CSI , and ISP tuning is highly desirable. System Optimization Deep understanding of compute budgeting , thermal constraints , memory management , DMA , and low-latency pipelines . Familiarity with debugging tools: JTAG , SWD , logic analyzers , oscilloscopes , perf counters , and profiling tools. Preferred (Bonus) Skills Experience with Secure Boot , TPM , Encrypted Model Execution , or Post-Quantum Cryptography (PQC) . Familiarity with safety standards like IEC 61508 , ISO 26262 , UL 60730 . Contributions to open-source ML frameworks or embedded model inference libraries. Why Join Us? At EURTH TECHTRONICS PVT LTD , you won't just be optimizing firmware — you will architect full-stack intelligent systems that push the boundary of what's possible in embedded AI. Work on production-grade, AI-powered devices for industrial, consumer, defense, and medical applications . Collaborate with a high-performance R&D team that builds edge-first, low-power, secure, and scalable systems . Drive core architecture and set the technology direction for a fast-growing, innovation-focused organization. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. 
We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
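The model-translation work described in this posting (quantization and toolchain optimization for edge accelerators) typically starts with post-training quantization. Below is a minimal sketch, assuming a TensorFlow SavedModel at ./saved_model; the calibration data here is a random stand-in you would replace with real preprocessed frames, and the Edge TPU step would additionally require Google's edgetpu_compiler, which is not shown.

```python
import numpy as np
import tensorflow as tf

# Stand-in for a few hundred real, preprocessed calibration frames.
calibration_images = np.random.rand(200, 224, 224, 3).astype("float32")


def representative_dataset():
    # The converter uses these samples to calibrate activation ranges
    # for full-integer (INT8) quantization.
    for image in calibration_images:
        yield [image[None, ...]]


converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer ops so the graph can map onto INT8-only NPUs/TPUs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```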
Posted 1 month ago
2.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India. Job Type: Full Time.

About the Role
We are looking for a hands-on and technically proficient Embedded Software Team Lead to drive the development of intelligent edge systems that combine embedded firmware, machine learning inference, and hardware acceleration. This role is perfect for someone who thrives at the intersection of real-time firmware design, AI model deployment, and hardware-software co-optimization. You will lead a team delivering modular, scalable, and efficient firmware pipelines that run quantized ML models on accelerators like Hailo, Coral, Torrent (BlackHole), Kendryte, and other emerging chipsets. Your focus will include model runtime integration, low-latency sensor processing, OTA-ready firmware stacks, and CI/CD pipelines for embedded products at scale.

Key Responsibilities
Technical Leadership & Planning: Own the firmware lifecycle across multiple AI-based embedded product lines. Define system and software architecture in collaboration with hardware, ML, and cloud teams. Lead sprint planning, code reviews, and performance debugging, and mentor junior engineers.
ML Model Deployment & Runtime Integration: Collaborate with ML engineers to port, quantize, and deploy models using TFLite, ONNX, or HailoRT. Build runtime pipelines that connect model inference with real-time sensor data (vision, IMU, acoustic); a runtime-integration sketch follows this posting. Optimize memory and compute flows for edge model execution under power/bandwidth constraints.
Firmware Development & Validation: Build production-grade embedded stacks using RTOS (FreeRTOS/Zephyr) or embedded Linux. Implement secure bootloaders, OTA update mechanisms, and encrypted firmware interfaces. Interface with a variety of peripherals including cameras, IMUs, analog sensors, and radios (BLE/Wi-Fi/LoRa).
CI/CD, DevOps & Tooling for Embedded: Set up and manage CI/CD pipelines for firmware builds, static analysis, and validation. Integrate Docker-based toolchains, hardware-in-loop (HIL) testing setups, and simulators/emulators. Ensure codebase quality, maintainability, and test coverage across the embedded stack.

Required Qualifications
Education: BE/B.Tech/M.Tech in Embedded Systems, Electronics, Computer Engineering, or related fields.
Experience: Minimum 4 years of embedded systems experience. Minimum 2 years in a technical lead or architect role. Hands-on experience in ML model runtime optimization and embedded system integration.

Technical Skills Required
Embedded Development & Tools: Expert-level C/C++, hands-on with RTOS and Yocto-based Linux. Proficient with toolchains like GCC/Clang, OpenOCD, JTAG/SWD, and logic analyzers. Familiarity with OTA, bootloaders, and memory management (heap/stack analysis, linker scripts).
ML Model Integration: Proficiency in TFLite, ONNX Runtime, HailoRT, or EdgeTPU runtimes. Experience with model conversion, quantization (INT8, FP16), and runtime optimization. Ability to read/modify model graphs and connect to inference APIs.
Connectivity & Peripherals: Working knowledge of BLE, Wi-Fi, LoRa, RS485, USB, and CAN protocols. Integration of camera modules, MIPI CSI, IMUs, and custom analog sensors.
DevOps for Embedded: Hands-on with GitLab/GitHub CI, Docker, and containerized embedded builds. Build system expertise: CMake, Make, Bazel, or Yocto preferred. Experience in automated firmware testing (HIL, unit, integration).

Preferred (Bonus) Skills
Familiarity with machine vision pipelines, ISP tuning, or video/audio codec integration.
Prior work on battery-operated devices , energy-aware scheduling , or deep sleep optimization . Contributions to embedded ML open-source projects or model deployment tools. Why Join Us? At EURTH TECHTRONICS PVT LTD , we go beyond firmware—we’re designing and deploying embedded intelligence on every device, from industrial gateways to smart consumer wearables. Build and lead teams working on cutting-edge real-time firmware + ML integration . Work on full-stack embedded ML systems using the latest AI accelerators and embedded chipsets . Drive product-ready, scalable software platforms that power IoT, defense, medical , and consumer electronics . How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
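For the runtime-integration work described in this posting, a common pattern is a thin inference wrapper around the TFLite interpreter that the sensor loop feeds frames into. A minimal sketch follows, assuming a quantized model.tflite and an OpenCV-readable camera at index 0; the Coral Edge TPU delegate shown in the comment is optional and hardware-specific.

```python
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

# On Coral hardware you could pass:
#   experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")]
interpreter = tflite.Interpreter(model_path="model.tflite")  # assumed model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0).astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print("top class:", int(np.argmax(scores)))
cap.release()
```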
Posted 1 month ago
0.0 - 4.0 years
0 Lacs
Hyderabad, Telangana
On-site
Hyderabad, Telangana, India. Job Type: Full Time.

About the Role
We are seeking a passionate and skilled Embedded ML Engineer to work on cutting-edge ML inference pipelines for low-power, real-time embedded platforms. You will help design and deploy highly efficient ML models on custom hardware accelerators like Hailo, Coral (Edge TPU), Kendryte K210, and Torrent/BlackHole in real-world IoT systems. This role combines model optimization, embedded firmware development, and toolchain management. You will be responsible for translating large ML models into efficient quantized versions, benchmarking them on custom hardware, and integrating them with embedded firmware pipelines that interact with real-world sensors and peripherals.

Key Responsibilities
ML Model Optimization & Conversion: Convert, quantize, and compile models built in TensorFlow, PyTorch, or ONNX to hardware-specific formats. Work with compilers and deployment frameworks like TFLite, HailoRT, EdgeTPU Compiler, TVM, or ONNX Runtime. Use techniques such as post-training quantization, pruning, distillation, and model slicing.
Embedded Integration & Inference Deployment: Integrate ML runtimes in C/C++ or Python into firmware stacks built on RTOS or embedded Linux. Handle real-time sensor inputs (camera, accelerometer, microphone) and pass them through inference engines. Manage memory, DMA transfers, inference buffers, and timing loops for deterministic behavior.
Benchmarking & Performance Tuning: Profile and optimize models for latency, memory usage, compute load, and power draw (a benchmarking sketch follows this posting). Work with runtime logs, inference profilers, and vendor SDKs to squeeze maximum throughput out of edge hardware. Conduct accuracy-versus-performance trade-off studies for different model variants.
Testing & Validation: Design unit, integration, and hardware-in-loop (HIL) tests to validate model execution on actual devices. Collaborate with hardware and firmware teams to debug runtime crashes, inference failures, and edge cases. Build reproducible benchmarking scripts and test data pipelines.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Embedded Systems, Computer Science, or related disciplines.
Experience: 2–4 years in embedded ML, edge AI, or firmware development with ML inference integration.

Technical Skills Required
Embedded Firmware & Runtime: Strong experience in C/C++ and basic Python scripting. Experience with RTOS (FreeRTOS, Zephyr) or embedded Linux. Understanding of memory-mapped I/O, ring buffers, circular queues, and real-time execution cycles.
ML Model Toolchains: Experience with TensorFlow Lite, ONNX Runtime, HailoRT, EdgeTPU, uTensor, or TinyML. Knowledge of quantization-aware training or post-training quantization techniques. Familiarity with model conversion pipelines and hardware-aware model profiling.
Media & Sensor Stack: Ability to work with input/output streams from cameras, IMUs, microphones, etc. Experience integrating inference with V4L2, GStreamer, or custom ISP preprocessors is a plus.
Tooling & Debugging: Git, Docker, cross-compilation toolchains (Yocto, CMake). Debugging with SWD/JTAG, GDB, or serial console-based logging. Profiling with memory maps, timing charts, and inference logs.

Preferred (Bonus) Skills
Previous work with low-power vision devices, audio keyword spotting, or sensor fusion ML. Familiarity with edge security (encrypted models, secure firmware pipelines). Hands-on with simulators/emulators for ML testing (Edge Impulse, Hailo's HEF emulator, etc.).
Participation in TinyML forums , open-source ML toolkits, or ML benchmarking communities. Why Join Us? At EURTH TECHTRONICS PVT LTD , we're not just building IoT firmware—we're deploying machine learning intelligence on ultra-constrained edge platforms , powering real-time decisions at the edge. Get exposure to full-stack embedded ML pipelines — from model quantization to runtime integration. Work with a world-class team focused on ML efficiency, power optimization, and embedded system scalability .️ Contribute to mission-critical products used in industrial automation, medical wearables, smart infrastructure , and more. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
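A simple way to approach the profiling responsibilities in this posting is to time repeated invocations on the target device and report percentile latencies. The sketch below assumes the same TFLite interpreter setup as the previous example and uses a synthetic input; vendor profilers (HailoRT, Edge TPU) would replace or supplement this on their own hardware.

```python
import time

import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")  # assumed model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])  # synthetic input, timing only

latencies_ms = []
for _ in range(200):
    interpreter.set_tensor(inp["index"], dummy)
    t0 = time.perf_counter()
    interpreter.invoke()
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)

print(f"p50={np.percentile(latencies_ms, 50):.2f} ms  "
      f"p95={np.percentile(latencies_ms, 95):.2f} ms  "
      f"max={max(latencies_ms):.2f} ms")
```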
Posted 1 month ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're a fast-moving, early-stage startup working at the cutting edge of AI to solve real-world problems in the future-of-work space. As part of the broader AI ecosystem, we're not just following trends; we're building the next wave of vertical AI infrastructure. We're seeking a driven Senior Backend Engineer to join our team. This role is perfect for an engineer who thrives in chaos, ships confidently, and can own DevOps, backend infrastructure, and AI-powered feature delivery like a pro. You must be available to start in 1 to 2 weeks!

What You'll Own:
Architect, scale, and secure our platform on AWS (EC2, Lambda, RDS, etc.). Automate deployments, logging, monitoring, and backups. Optimize and expand our FastAPI backend and Next.js platform to support new workflows, smart inference, and user-triggered pipelines. Troubleshoot issues across AI APIs, improve prompt strategy, and guide the integration of ML models and data-driven components into the backend. Implement security best practices across APIs, databases, auth flows, and user data. Be the grown-up in the room when it comes to system design. Unblock the team. Ship high-impact features weekly. Handle what the full-stack lead can't get to. Be the difference between "2 weeks" and "2 days."

You're a Fit If You:
Have 7+ years of experience in backend/platform/DevOps roles, ideally within a startup or SaaS environment. Are fluent in AWS, FastAPI, and CI/CD pipelines. Have built and scaled APIs in production and know how to handle rate limits, timeouts, retries, and error handling (a retry/timeout sketch follows this posting). Can debug and optimize AI-driven workflows, APIs, and prompt-based interfaces. Understand data security, auth, encryption, and compliance. Enjoy working with founders directly and thrive in high-ownership, low-structure environments.
Bonus: Experience with Hugging Face, Pandas, Supabase, Postgres, or building AI-first apps.
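One workable pattern for the rate-limit/timeout/retry handling mentioned above is an async FastAPI route that wraps the upstream AI call with an explicit timeout and exponential backoff. This is a minimal sketch, not the company's actual service: UPSTREAM_URL, the request payload, and the API_KEY environment variable are hypothetical; httpx and tenacity are one possible choice of libraries.

```python
import os

import httpx
from fastapi import FastAPI, HTTPException
from tenacity import retry, stop_after_attempt, wait_exponential

app = FastAPI()
UPSTREAM_URL = os.getenv("UPSTREAM_URL", "https://api.example.com/v1/complete")  # hypothetical


@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=8))
async def call_model(prompt: str) -> dict:
    async with httpx.AsyncClient(timeout=httpx.Timeout(10.0)) as client:
        resp = await client.post(
            UPSTREAM_URL,
            json={"prompt": prompt},
            headers={"Authorization": f"Bearer {os.getenv('API_KEY', '')}"},
        )
        resp.raise_for_status()  # 429/5xx errors bubble up and trigger a retry
        return resp.json()


@app.post("/generate")
async def generate(payload: dict):
    try:
        return await call_model(payload.get("prompt", ""))
    except httpx.HTTPError as exc:
        raise HTTPException(status_code=502, detail=str(exc))
```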
Posted 1 month ago
0 years
0 Lacs
India
On-site
Company Description Triple I is a leading provider of AI-powered tools for ESG reporting automation. The company offers solutions that handle the entire ESG process, from real-time data integration to audit-ready reports aligned with industry regulations. Trusted by teams across various industries, Triple I simplifies ESG reporting to help enterprises move faster, stay compliant, and reduce workloads. Role Description We’re looking for a skilled AI Engineer to build a powerful AI-driven system that can analyze, transform, and standardize raw datasets into a predefined destination schema — with full language normalization, schema mapping, and intelligent data validation. This role is perfect for someone with deep expertise in data pipelines, NLP, and intelligent schema inference who thrives on creating scalable, adaptable solutions that go far beyond hardcoded logic. What You’ll Be Doing Develop a generalizable AI algorithm that transforms raw, unstructured (or semi-structured) source datasets into a standardized schema Automate schema mapping, data enrichment, PK/FK handling, language translation, and duplicate detection Build logic to flag unresolved data, generate an UnresolvedData_Report, and explain confidence or failure reasons Ensure all outputs are generated in English only, regardless of input language Experiment with 2–3 AI/ML approaches (e.g. NLP models, rule-based logic, transformers, clustering) and document tradeoffs Deliver all outputs (destination tables) in clean, validated formats (CSV/XLSX) Maintain detailed documentation of preprocessing, validation, and accuracy logic Key Responsibilities Design AI logic to dynamically extract, map, and organize data into 10+ destination tables Handle primary key/foreign key relationships across interconnected tables Apply GHG Protocol logic to assign Scope 1, 2, or 3 emissions automatically based on activity type Build multilingual support: auto-translate non-English input and ensure destination is 100% English Handle duplicate and conflicting records with intelligent merging or flagging logic Generate automated validation logs for transparency and edge case handling
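One lightweight way to prototype the schema-mapping and unresolved-data flagging described in this posting is fuzzy matching of source columns against the destination schema, with low-confidence fields routed to an UnresolvedData_Report. The sketch below uses rapidfuzz and pandas with illustrative column names and a tunable threshold; a production version would layer in embeddings/NLP, translation, and PK/FK handling.

```python
import pandas as pd
from rapidfuzz import fuzz, process

DESTINATION_SCHEMA = ["facility_name", "country", "energy_kwh", "emission_scope"]  # illustrative
source = pd.DataFrame({"Standort": ["Berlin"], "Strom (kWh)": [1200], "Scope": ["Scope 2"]})

mapping, unresolved = {}, []
for col in source.columns:
    match, score, _ = process.extractOne(col, DESTINATION_SCHEMA, scorer=fuzz.token_set_ratio)
    if score >= 70:  # confidence threshold (tunable)
        mapping[col] = match
    else:
        unresolved.append({"source_column": col, "best_guess": match,
                           "confidence": score, "reason": "low mapping confidence"})

standardized = source.rename(columns=mapping)
pd.DataFrame(unresolved).to_csv("UnresolvedData_Report.csv", index=False)
standardized.to_csv("destination_table.csv", index=False)
print("mapped:", mapping, "| unresolved:", [u["source_column"] for u in unresolved])
```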
Posted 1 month ago
0 years
0 Lacs
Delhi, India
On-site
We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI, building scalable models and intelligent systems that power our next-generation sports tech platform.

Responsibilities
Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding (a detection-loop sketch follows this posting). Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4, etc.) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base.

Requirements
Strong command of Python and familiarity with C/C++. Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e.g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems. An undergraduate degree (Master's or PhD preferred) in Computer Science, Artificial Intelligence, or a related discipline is preferred; a strong academic background is a plus.

This job was posted by Siddhartha Dutta from Tech At Play.
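For the real-time detection and video-ingestion work described above, a typical starting point couples OpenCV frame capture with a pretrained YOLO model. This is a minimal sketch, assuming the ultralytics package and a hypothetical RTSP URL you would substitute; deployment on Jetson/T4 would normally add TensorRT export, batching, and tracking on top.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint
cap = cv2.VideoCapture("rtsp://camera.local/stream")  # hypothetical stream URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]  # single-frame inference
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = results.names[int(box.cls[0])]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```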
Posted 1 month ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
As a Senior Machine Learning Engineer, you will be responsible for designing, developing, and deploying cutting-edge models for end-to-end content generation, including AI-driven image/video generation, lip syncing, and multimodal AI systems. You will work on the latest advancements in deep generative modeling to create highly realistic and controllable AI-generated media.

Responsibilities
Research and Develop: Design and implement state-of-the-art generative models, including Diffusion Models, 3D VAEs, and GANs for AI-powered media synthesis (a text-to-image sketch follows this posting). End-to-End Content Generation: Build and optimize AI pipelines for high-fidelity image/video generation and lip syncing using diffusion and autoencoder models. Speech and Video Synchronization: Develop advanced lip-syncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Real-Time AI Systems: Implement and optimize models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques. Scaling and Production Deployment: Work closely with software engineers to deploy models efficiently on cloud-based architectures (AWS, GCP, or Azure). Collaboration and Research: Stay ahead of the latest trends in deep generative models, diffusion models, and transformer-based vision systems to enhance AI-generated content quality. Experimentation and Validation: Design and conduct experiments to evaluate model performance; improve fidelity, realism, and computational efficiency; and refine model architectures. Code Quality and Best Practices: Participate in code reviews, improve model efficiency, and document research findings to enhance team knowledge-sharing and product development.

Requirements
Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field. 3+ years of experience working with deep generative models, including Diffusion Models, 3D VAEs, GANs, and autoregressive models. Strong proficiency in Python and deep learning frameworks such as PyTorch. Expertise in multi-modal AI, text-to-image and image-to-video generation, and audio-to-lip-sync. Strong understanding of machine learning principles and statistical methods. Good to have: experience in real-time inference optimization, cloud deployment, and distributed training. Strong problem-solving abilities and a research-oriented mindset to stay updated with the latest AI advancements. Familiarity with generative adversarial techniques, reinforcement learning for generative models, and large-scale AI model training.

Preferred Qualifications
Experience with transformers and vision-language models (e.g., CLIP, BLIP, GPT-4V). Background in text-to-video generation, lip-sync generation, and real-time synthetic media applications. Experience in cloud-based AI pipelines (AWS, Google Cloud, or Azure) and model compression techniques (quantization, pruning, distillation). Contributions to open-source projects or published research in AI-generated content, speech synthesis, or video synthesis.

This job was posted by Meghna Sidda from TrueFan.
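As a concrete anchor for the diffusion-model work described above, the open-source diffusers library shows the basic text-to-image loop in a few lines. This is only a minimal sketch, assuming a CUDA GPU and the publicly available runwayml/stable-diffusion-v1-5 checkpoint; the posting's lip-sync and video pipelines would build far beyond this.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photorealistic portrait of a news presenter, studio lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("presenter.png")  # saved locally for inspection
```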
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities
Ship Micro-services: Build FastAPI services that handle 800 req/s today and will triple within a year (sub-200 ms p95). Power Real-Time Learning: Drive the quiz-scoring and AI-tutor engines that crunch millions of events daily. Design for Scale & Safety: Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch. Deploy Globally: Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes. Automate Releases: GitLab CI/CD plus blue-green/canary releases means multiple safe prod deploys each week. Own Reliability: Instrument with Prometheus/Grafana, chase 99.9% uptime, and trim infra spend (an instrumentation sketch follows this posting). Expose Gen-AI at Scale: Publish LLM inference and vector-search endpoints in partnership with the AI team. Ship Fast, Learn Fast: Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod.

Requirements
2+ yrs Python back-end experience (FastAPI/Flask). Strong with Docker and container orchestration basics. Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS), or GCP (GKE/Compute) in production. SQL/NoSQL (Postgres, MongoDB). You've built systems from scratch and have solid system-design fundamentals. k8s at scale, Terraform. Experience with AI/ML inference services (LLMs, vector DBs). Go/Rust for high-perf services. Observability: Prometheus, Grafana, OpenTelemetry.

This job was posted by Rimjhim Tripathi from CareerNinja.
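For the reliability ownership above (Prometheus/Grafana, sub-200 ms p95), a common first step is exposing per-route latency histograms from the FastAPI service itself. The sketch below uses prometheus_client; the route, metric name, and bucket boundaries are illustrative rather than this team's actual configuration.

```python
import time

from fastapi import FastAPI, Request
from prometheus_client import Histogram, make_asgi_app

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # endpoint scraped by Prometheus

REQUEST_LATENCY = Histogram(
    "http_request_latency_seconds",
    "Request latency by path",
    ["path"],
    buckets=(0.05, 0.1, 0.2, 0.5, 1.0, 2.0),  # the 0.2 s bucket tracks the p95 target
)


@app.middleware("http")
async def record_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    REQUEST_LATENCY.labels(path=request.url.path).observe(time.perf_counter() - start)
    return response


@app.get("/healthz")
async def healthz():
    return {"status": "ok"}
```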
Posted 1 month ago
5.0 years
50 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Experience: 5.00+ years. Salary: INR 5000000.00 per year (based on experience). Expected Notice Period: 15 days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time, permanent position (payroll and compliance to be managed by Precanto).
(Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity?
Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL

The client is looking for:
We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.

Job Description (Full-time; Team: Data & ML Engineering)
We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do
Build and optimize machine learning models, from regression to time-series forecasting. Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker. Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn (a training-and-tracking sketch follows this posting). Design and deploy LLM-powered features and workflows. Collaborate closely with product managers to turn ideas into experiments and production-ready solutions. Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform.

Basic Skills
Proven ability to work creatively and analytically in a problem-solving environment. Excellent communication (written and oral) and interpersonal skills. Strong understanding of supervised learning and time-series modeling. Experience deploying ML models and building automated training/inference pipelines. Ability to work cross-functionally in a collaborative and fast-paced environment. Comfortable wearing many hats and owning projects end-to-end. Write clean, tested, and scalable Python and SQL code. Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing.

Advanced Skills
Familiarity with MLOps best practices. Prior experience with LLM-based features or production-level NLP. Experience with LLMs, vector stores, or prompt engineering. Contributions to open-source ML or data tools.

Tech Stack
Languages: Python, SQL. Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter. Infra: Docker, Airflow, S3, asyncio, Pydantic.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
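For the training-and-evaluation workflow in the posting above (scikit-learn, MLflow, time-series forecasting), a compact pattern is walk-forward validation with metrics logged to MLflow. The sketch below runs on synthetic data as a stand-in for real finance series; Ray Tune could wrap the same training function for hyperparameter search.

```python
import mlflow
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

# Synthetic monthly-style series with lag features (stand-in for real data).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=240)) + 10 * np.sin(np.arange(240) * 2 * np.pi / 12)
X = np.column_stack([np.roll(y, k) for k in (1, 2, 12)])[12:]
y = y[12:]

with mlflow.start_run(run_name="walk_forward_gbr"):
    maes = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
        model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
        model.fit(X[train_idx], y[train_idx])
        maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae_mean", float(np.mean(maes)))
    print("fold MAEs:", [round(m, 3) for m in maes])
```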
Posted 1 month ago
5.0 years
0 Lacs
Delhi, India
On-site
About This Role As a Staff AI Engineer you will get to play with petabyte data gathered from a multitude of data sources including Balbix proprietary sensors and 3rd party threat feeds. You will leverage a variety of AI techniques including deep learning, probabilistic graphical models, graph learning, recommendation systems, reinforcement learning, NLP, etc. And of course, you will be part of a team building a world-class product addressing one of the grand challenges in the technology industry. DATA SCIENCE AT BALBIX At Balbix we believe in using the right algorithms and tools to ensure correctness, performance and deliver an excellent user experience. We draw boldly from the latest in AI/ML research but are unafraid to go beyond bayesian inference and statistical models if the situation demands it. We are generalists, caring as much about storytelling with data, as about bleeding edge techniques, scalable model training and deployment. We are building a data science culture with equal emphasis on knowing our data, grokking security first principles, caring about customer needs, explaining our model predictions, deploying at scale, communicating our work, and adapting the latest advances. We look out for each other, enjoy each others’ company, and keep an open channel of communication about all things data and non-data. You Will Design and develop an ensemble of classical and deep learning algorithms for modeling complex interactions between people, software, infrastructure and policies in an enterprise environment Design and implement algorithms for statistical modeling of enterprise cybersecurity risk Apply data-mining, AI and graph analysis techniques to address a variety of problems including modeling, relevance and recommendation. Build production quality solutions that balance complexity and performance Participate in the engineering life-cycle at Balbix, including designing high quality ML infrastructure and data pipelines, writing production code, conducting code reviews and working alongside our infrastructure and reliability teams Drive the architecture and the usage of open source software library for numerical computation such as TensorFlow, PyTorch, and ScikitLearn You Are Able to take on very complex problems, learn quickly, iterate, and persevere towards a robust solution Product-focused and passionate about building truly usable systems Collaborative and comfortable working across teams including data engineering, front end, product management, and DevOps Responsible and like to take ownership of challenging problems A good communicator, and facilitate teamwork via good documentation practices Comfortable with ambiguity and thrive in designing algorithms for evolving needs Intuitive in using the right type of models to address different product needs Curious about the world and your profession, constant learner You Have A Ph.D./M.S. in Computer Science or Electrical Engineering with hands-on software engineering experience 5+ years of experience in the field of Machine Learning and programming in Python. Expertise in programming concepts and building large scale systems. Knowledge of state-of-the-art algorithms combined with expertise in statistical analysis and modeling. Robust understanding of NLP, Probabilistic Graphical Models, Deep Learning with graphs structures, model explainability, etc. Foundational knowledge of probability, statistics and linear algebra
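Given this posting's emphasis on graph analysis over people, software, and infrastructure, a toy illustration is ranking assets in an enterprise dependency graph by centrality. The sketch below uses networkx with fabricated nodes and PageRank as a crude proxy; the real system would combine such graph signals with learned models.

```python
import networkx as nx

# Fabricated enterprise graph: users -> hosts -> services (edges = access/depends-on).
G = nx.DiGraph()
G.add_edges_from([
    ("alice", "laptop-01"), ("bob", "laptop-02"),
    ("laptop-01", "ad-server"), ("laptop-02", "ad-server"),
    ("ad-server", "db-finance"), ("web-frontend", "db-finance"),
])

# PageRank as a rough proxy for how much exposure flows into each asset.
scores = nx.pagerank(G, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{node:12s} risk-centrality={score:.3f}")
```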
Posted 1 month ago
0 years
0 Lacs
India
On-site
Who you are You're someone who’s already shipped GenAI stuff—even if it was small: a chatbot, a RAG tool, or an agent prototype. You live in Python, LangChain, LlamaIndex, Hugging Face, and vector DBs like FAISS or Milvus. You know your way around prompts—noisy chains, rerankers, retrievals. You've deployed models or services on Azure/AWS/GCP, wrapped them into FastAPI endpoints, and maybe even wired a bit of terraform/ARM. You’re not building from spreadsheets; you're iterating with real data, debugging hallucinations, and swapping out embeddings in production. You can read blog posts and paper intros, follow new methods like QLoRA, and build on them. You're fine with ambiguity and startup chaos—no strict specs, no roadmap, just a mission. You work in async Slack, ask quick questions, push code that works, and help teammates stay afloat. You're not satisfied with just getting things done—you want GenAI to feel reliable, usable, and maybe even fun. What you’ll actually do You’ll build real GenAI features: agentic chatbots for document lookup, conversation assistants, or knowledge workflows. You’ll design and implement RAG systems: data ingestion, embeddings, vector indexing, retrievals, and prompt pipelines. You’ll write inference APIs in FastAPI that work with vector stores and cloud LLM endpoints. You’ll containerize services with Docker, push to Azure/AWS/GCP, wire basic CI/CD, monitor latency and faulty responses, and iterate fast. You’ll experiment with LoRA/QLoRA fine-tuning on small LLMs, test prompt variants, and measure output quality. You’ll collaborate with DevOps to ensure deployment reliability, QA to make tests more robust, and frontend folks to shape UX. You’ll share your work in quick “demo & dish” sessions: what's working, what's broken, what you're trying next. You’ll tweak embeddings, watch logs, and improve pipelines one experiment at a time. You’ll help write internal docs or “how-tos” so others can reuse your work. Skills and knowledge You have solid experience in Python backend development (FastAPI/Django) Experienced with LLM frameworks: LangChain, LlamaIndex, CrewAI, or similar Comfortable with vector databases: FAISS, Pinecone, Milvus Able to fine-tune models using PEFT/LoRA/QLoRA Knowledge of embeddings, retrieval systems, RAG pipelines, and prompt engineering Familiar with cloud deployment and infra-as-code (Azure, AWS, GCP with Docker/K8s, Terraform/ARM) Good understanding of monitoring and observability—tracking response latency, hallucinations, and costs Able to read current research, try prototypes, and apply them pragmatically Works well in minimal-structure startups; self-driven, team-minded, proactive communicator
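For the RAG responsibilities in this posting (ingestion, embeddings, vector indexing, retrieval), the core loop is small. Below is a minimal sketch with sentence-transformers and FAISS over a few in-memory documents; a real system would add chunking, reranking, and an LLM call on top of the retrieved context.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are archived for seven years in the finance data lake.",
    "Employees can reset their VPN credentials via the self-service portal.",
    "Quarterly ESG reports are reviewed by the compliance team.",
]  # stand-in corpus

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public embedding model
emb = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(["how do I reset my VPN access?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")
```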
Posted 1 month ago
2.0 years
0 Lacs
Delhi, India
On-site
What is Hunch? Hunch is a dating app that helps you land a date without swiping like a junkie. Designed for people tired of mindless swiping and commodified matchmaking, Hunch leverages a powerful AI-engine to help users find meaningful connections by focusing on personality over just looks. With 2M+ downloads and a 4.4-star rating , Hunch is going viral in the US by challenging the swipe-left/right norm of traditional apps. Hunch is a Series A funded ($23 Million) startup building the future of social discovery in a post-AI world. Link to our fundraising announcement Key Offerings Of Hunch Swipe Less, Vibe More: Curated profiles, cutting the clutter of endless swiping. Personality Matters: Opinion-based, belief-based, and thought-based compatibility rather than just focusing on looks. Every Match, Verified: No bots, no catfishing—just real, trustworthy connections Match Scores: Our AI shows compatibility percentages, helping users identify their “100% vibe match.” We're looking for a highly motivated and skilled Data Engineer . You'll design, build, and optimize our robust data infrastructure. You'll also develop scalable data pipelines, ensure data quality, and collaborate closely with our machine learning teams. We're looking for someone passionate about data who thrives in a dynamic environment. If you enjoy tackling complex challenges with cutting-edge technologies, we encourage you to apply. What You'll Do: Architect & Optimize Data Infrastructure: Design, implement, and maintain highly scalable data infrastructure. This includes processes for auto-scaling and easy maintainability of our data pipelines. Develop & Deploy Data Pipelines: Lead the design, implementation, testing, and deployment of resilient data pipelines. These pipelines will ingest, transform, and process large datasets efficiently. Empower ML Workflows: Partner with Machine Learning Engineers to understand their specific data needs. This includes providing high-quality data for model training and ensuring low-latency data delivery for real-time inference. Ensure seamless data flow and efficient integration with ML models. Ensure Data Integrity: Establish and enforce robust systems and processes. These will ensure comprehensive data quality assurance, validation, and reliability across the entire data lifecycle. What You'll Bring: Experience: A minimum of 2+ years of professional experience in data engineering. You should have a proven track record of delivering solutions in a production environment. Data Storage Expertise: Hands-on experience with relational databases (e.g., PostgreSQL, MySQL, Redshift) and cloud object storage (e.g., S3) is required. Experience with distributed file systems (e.g., HDFS) and NoSQL databases is a plus. Big Data Processing: Demonstrated proficiency with big data processing platforms and frameworks. Examples include Hadoop, Spark, Hive, Presto, and Trino. Pipeline Orchestration & Messaging: Practical experience with key data pipeline tools. This includes message queues (e.g., Kafka, Kinesis), workflow orchestrators (e.g., dbt, Airflow), change data capture (e.g., Debezium), and ETL services (e.g., AWS Glue ETL). Programming Prowess: Strong programming skills in Python and SQL are essential. Proficiency in at least one JVM-based language (e.g., Java, Scala) is also required. ML Acumen: A solid understanding of machine learning workflows. This includes data preparation and feature engineering concepts. Innovation & Agility: You should be a creative problem-solver. 
You'll need a proactive approach to experimenting with new technologies.

What we have to offer
Competitive financial rewards plus annual PLI (Performance Linked Incentives). A meritocracy-driven, candid, and diverse culture. Employee benefits such as medical insurance. One annual all-expenses-paid company trip for all employees to bond. Although we work from our office in New Delhi, we are flexible in our style and approach.

Life @Hunch
Work Culture: At Hunch we take our work seriously but don't take ourselves too seriously. Everyone is encouraged to think as owners and not renters, and we prefer to let builders build, empowering people to pursue independent ideas. Impact: Your work will shape the future of social engagement and connect people around the world. Collaboration: Join a diverse team of creative minds and be part of a supportive community. Growth: We invest in your development and provide opportunities for continuous learning. Backed by Global Investors: Hunch is a Series A funded startup, backed by Hashed, AlphaWave, Brevan Howard and Polygon Studios. Experienced Leadership: Hunch is founded by a trio of industry veterans - Ish Goel (CEO), Nitika Goel (CTO), and Kartic Rakhra (CMO) - serial entrepreneurs with the last exit from Nexus Mutual, a web3 consumer-tech startup.
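A skeletal version of the ingest-transform-validate pipelines described in the posting above, written as an Airflow DAG. This is only a sketch: the DAG id, callables, and daily schedule are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`); a production version would call Spark/dbt jobs against real sources.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**_):
    print("pull raw events from Kafka / S3 (placeholder)")


def transform_events(**_):
    print("clean, deduplicate, and enrich events (placeholder)")


def validate_quality(**_):
    print("run data-quality checks and publish metrics (placeholder)")


with DAG(
    dag_id="user_events_daily",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    validate = PythonOperator(task_id="validate", python_callable=validate_quality)

    extract >> transform >> validate
```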
Posted 1 month ago
8.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements, viewing them through the lens of business acumen, and converting this knowledge into a data problem. You'll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses. As an analytical professional, you will use statistical methods, machine learning, and programming skills to extract insights and knowledge from data; your primary goal is to solve complex business problems, make predictions, and drive strategic decision-making by uncovering patterns and trends within large datasets. In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modeling techniques, you'll have the opportunity to create models that defy convention, models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally. Additionally, you're not just building and validating models; you're deploying them as code to applications and processes, ensuring that the models you've selected sustain their business value throughout their lifecycle. Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset, ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation.

Key Responsibilities
Problem Framing: Collaborating with stakeholders to understand business problems and translate them into data-driven questions. Data Collection and Cleaning: Sourcing, collecting, and cleaning large, often messy, datasets from various sources, preparing them for analysis. Exploratory Data Analysis (EDA): Performing initial investigations on data to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations. Model Development: Building, training, and validating machine learning models (e.g., regression, classification, clustering, deep learning) to predict outcomes or identify relationships. Statistical Analysis: Applying statistical tests and methodologies to draw robust conclusions from data and quantify uncertainty. Feature Engineering: Creating new variables or transforming existing ones to improve model performance and provide deeper insights (a pipeline sketch follows this posting).
Model Deployment: Working with engineering teams to deploy models into production environments, making them operational for real-time predictions or insights. Communication and Storytelling: Presenting complex findings and recommendations clearly and concisely to both technical and non-technical audiences, often through visualizations and narratives. Monitoring and Maintenance: Tracking model performance in production and updating models as data patterns evolve or new data becomes available.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset and are keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Expertise
8-10 years of experience as a Data Scientist. Programming Languages: Strong proficiency in Python and/or R, with libraries for data manipulation (e.g., Pandas, dplyr), scientific computing (e.g., NumPy), and machine learning (e.g., Scikit-learn, TensorFlow, PyTorch). Statistics and Probability: A solid understanding of statistical inference, hypothesis testing, probability distributions, and experimental design. Machine Learning: Deep knowledge of various machine learning algorithms, their underlying principles, and when to apply them. Database Querying: Proficiency in SQL for extracting and manipulating data from relational databases. Data Visualization: Ability to create compelling and informative visualizations using tools like Matplotlib, Seaborn, Plotly, or Tableau. Big Data Concepts: Familiarity with concepts and tools for handling large datasets, though often relying on Data Engineers for infrastructure. Domain Knowledge: Understanding of the specific industry or business domain to contextualize data and insights.

Preferred Technical and Professional Experience
Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value.
Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
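A compact illustration of the model-development and feature-engineering responsibilities described in this posting: a scikit-learn pipeline that preprocesses mixed-type features and cross-validates a classifier. This is a minimal sketch on synthetic data; the column names and the churn-like target are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 120, 500),
    "monthly_spend": rng.gamma(2.0, 50.0, 500),
    "segment": rng.choice(["smb", "mid", "enterprise"], 500),
})
y = (df["monthly_spend"] / df["tenure_months"] > 2.5).astype(int)  # synthetic target

pre = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_months", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
])
model = Pipeline([("pre", pre), ("clf", GradientBoostingClassifier())])

scores = cross_val_score(model, df, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```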
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business as well as the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data analytic, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives and introduce customizations for routing and matching for products and use cases that go beyond the core Uber capabilities.

What the Candidate Will Do
Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business. Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionize, scale, and deploy models world-wide. Use statistical techniques to measure success and develop north-star metrics and KPIs to help provide a more rigorous data-driven approach, in close partnership with Product and other subject areas such as engineering, operations and marketing. Design experiments and interpret the results to draw detailed and impactful conclusions (an experiment-analysis sketch follows this posting). Collaborate with data scientists and engineers to build and improve on the availability, integrity, accuracy, and reliability of data logging and data pipelines. Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives. Present findings to senior leadership to drive business decisions.

Basic Qualifications
Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields. 4+ years of experience as a Data Scientist, Machine Learning Engineer, or in other data science-focused functions. Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics. Hands-on experience building and deploying ML models. Ability to use a language like Python or R to work efficiently at scale with large data sets. Significant experience in setting up and evaluating complex experiments. Experience with exploratory data analysis, statistical analysis and testing, and model development. Knowledge of modern machine learning techniques applicable to marketplaces and platforms. Proficiency in one or more of the following technologies: SQL, Spark, Hadoop.

Preferred Qualifications
Advanced SQL expertise. Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways. Proven aptitude for data storytelling and root cause analysis using data. Advanced understanding of statistics, causal inference, and machine learning. Experience designing and analyzing large-scale online experiments. Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail. Ability to work in a self-guided manner. Ability to mentor, coach and develop junior team members. Superb communication and organisation skills.
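For the experiment design and interpretation work described above, the analysis step often reduces to comparing conversion rates between control and treatment arms with a two-proportion test. This is a minimal sketch with made-up counts, using statsmodels; real experiment analysis would add guardrail metrics, variance reduction, and multiple-testing care.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

# Made-up experiment counts: conversions and exposed users per arm.
conversions = np.array([1850, 1985])   # [control, treatment]
exposures = np.array([25000, 25100])

stat, p_value = proportions_ztest(conversions, exposures)
rates = conversions / exposures
lo, hi = proportion_confint(conversions[1], exposures[1], alpha=0.05)

print(f"control={rates[0]:.4f}  treatment={rates[1]:.4f}")
print(f"z={stat:.2f}  p={p_value:.4f}  treatment 95% CI=({lo:.4f}, {hi:.4f})")
```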
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning. Key Responsibilities Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks. Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring. Leverage transfer learning, foundation models, or self-supervised approaches where suitable. Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure. Continuously monitor model performance and implement retraining workflows to ensure accuracy over time. Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems. Write clean, well-documented, and reusable code to support agile experimentation and long-term platform sustainability. Requirements Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning. Skills: Languages: Expert in Python; good knowledge of R or Java is a plus. ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX. Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe). Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.). Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data. NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred. Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML. Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference. CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc. Soft Skills & Competencies Strong analytical and systems thinking; able to break down business problems into ML components. Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders. Proven ability to work cross-functionally with designers, engineers, product managers, and analysts. Demonstrated bias for action, rapid experimentation, and iterative delivery of impact. Benefits Competitive compensation and full-time benefits. Opportunities for certification and professional growth. Flexible work hours and remote work options. Inclusive, innovative, and supportive team culture. (ref:hirist.tech)
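As a small, self-contained illustration of the deployment skills listed above (PyTorch plus ONNX), the sketch below exports a stock torchvision model to ONNX and sanity-checks it with onnxruntime; the model choice and file name are placeholders, not requirements of the role.

```python
# Hedged sketch: export a PyTorch vision model to ONNX and verify the exported graph.
import numpy as np
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()  # stand-in for a trained model
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size at inference time
)

sess = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
(logits,) = sess.run(None, {"input": dummy.numpy()})

# The exported graph should closely reproduce the eager PyTorch outputs.
assert np.allclose(logits, model(dummy).detach().numpy(), atol=1e-4)
```

The same pattern (export, then numerically compare against the framework model) applies whether the target runtime is onnxruntime, TensorRT, or a serverless inference endpoint.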
Posted 1 month ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Capgemini Invent Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what’s next for their businesses. Your Role Job Description Edge AI Data Scientists will be responsible for designing, developing, and validating machine learning models—particularly in the domain of computer vision—for deployment on edge devices. This role involves working with data from cameras, sensors, and embedded platforms to enable real-time intelligence for applications such as object detection, activity recognition, and visual anomaly detection. The position requires close collaboration with embedded systems and AI engineers to ensure models are lightweight, efficient, and hardware-compatible. Candidate Requirements Education Bachelor's or Master’s degree in Data Science, Computer Science, or a related field. Experience 3+ years of experience in data science or machine learning with a strong focus on computer vision. Experience in developing models for edge deployment and real-time inference. Familiarity with video/image datasets and deep learning model training. Skills Proficiency in Python and libraries such as OpenCV, PyTorch, TensorFlow, and FastAI. Experience with model optimization techniques (quantization, pruning, etc.) for edge devices. Hands-on experience with deployment tools like TensorFlow Lite, ONNX, or OpenVINO. Strong understanding of computer vision techniques (e.g., object detection, segmentation, tracking). Familiarity with edge hardware platforms (e.g., NVIDIA Jetson, ARM Cortex, Google Coral). Experience in processing data from camera feeds or embedded image sensors. Strong problem-solving skills and ability to work collaboratively with cross-functional teams. Your Profile Responsibilities Develop and train computer vision models tailored for constrained edge environments. Analyze camera and sensor data to extract insights and build vision-based ML pipelines. Optimize model architecture and performance for real-time inference on edge hardware. Validate and benchmark model performance on various embedded platforms. Collaborate with embedded engineers to integrate models into real-world hardware setups. Stay up-to-date with state-of-the-art computer vision and Edge AI advancements. Document models, experiments, and deployment configurations. What You Will Love About Working Here We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment that helps you maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs.
It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
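To make the edge-optimization requirements above concrete, here is a minimal post-training quantization sketch with TensorFlow Lite, one of the deployment tools the listing names; the placeholder Keras model and file name are illustrative assumptions, not project specifics.

```python
# Hedged sketch: convert a Keras vision model to a quantized TFLite flatbuffer
# and run one inference with the TFLite interpreter.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in for a trained model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```

Full integer quantization (with a representative dataset) or pruning would follow the same conversion flow before benchmarking on the target device (Jetson, Coral, etc.).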
Posted 1 month ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Name: Principal Data Scientist Department Name: AI & Data Science Role GCF: 6 Hiring Manager Name: Swaroop Suresh About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Principal AI Platform Architect—Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen’s digital-transformation, cloud-modernisation and product-delivery objectives. Design and evolve foundational platform components—feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles. Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use-case. Orchestrate the full delivery pipeline for AI solutions—pilot → regulated validation → production rollout → post-launch monitoring—defining stage-gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements. Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load-balancing, maintaining <100 ms P95 latency while optimising spend. Implement robust MLOps and release-management practices (CI/CD for models, blue-green & canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability. Embed responsible-AI and security-by-design controls—data privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥ 50%. Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources. Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements. Establish observability frameworks—metrics, distributed tracing, drift detection, SLA dashboards—to keep models performant, reliable and compliant at scale.
Track emerging technologies and regulations (serverless GPUs, confidential compute, EU AI Act) and integrate innovations that maintain Amgen’s leadership in enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software. Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. Good-to-Have Skills: Experience in the biotechnology or pharma industry is a big plus. Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science and/or Data Science. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14+ years of experience in Computer Science, IT or a related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or a related field. Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
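As a brief illustration of the experiment-tracking and model-registry services described in this role, the sketch below logs a toy scikit-learn run to MLflow; the experiment name, parameters, and dataset are hypothetical.

```python
# Hedged sketch: track one training run with MLflow (params, metric, model artifact).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

params = {"n_estimators": 200, "max_depth": 6}
with mlflow.start_run():
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # makes the artifact visible to a registry
```

In a platform setting, the tracking URI, registry, and promotion gates would be centrally managed services rather than local defaults.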
Posted 1 month ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Name: Principal Data Scientist Department Name: AI & Data Science Role GCF: 6 Hiring Manager Name: Swaroop Suresh About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Principal AI Platform Architect —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen’s digital-transformation and cloud-modernization objectives. Design and evolve foundational platform components—feature stores, model-registry, experiment-tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles. Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving. Embed responsible-AI and security-by-design controls—data-privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources. Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend. Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs. Establish observability frameworks—metrics, distributed tracing, drift detection—to ensure models remain performant, reliable and compliant in production. Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. 
NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. Good-to-Have Skills: Experience in the biotechnology or pharma industry is a big plus. Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science, Data Science or MBA with AI focus. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14+ years of experience in Computer Science, IT or a related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or a related field. Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago
15.0 years
0 Lacs
Delhi
On-site
Overview: The Clinton Health Access Initiative, Inc. (CHAI) is a global health organization committed to the mission of saving lives and reducing the burden of disease in low-and middle-income countries. We work at the invitation of governments to support them and the private sector to create and sustain high-quality health systems. CHAI was founded in 2002 in response to the HIV/AIDS epidemic with the goal of dramatically reducing the price of life-saving drugs and increasing access to these medicines in the countries with the highest burden of the disease. Over the following two decades, CHAI has expanded its focus. Today, along with HIV, we work in conjunction with our partners to prevent and treat infectious diseases such as COVID-19, malaria, tuberculosis, and hepatitis. Our work has also expanded into cancer, diabetes, hypertension, and other non-communicable diseases, and we work to accelerate the rollout of lifesaving vaccines, reduce maternal and child mortality, combat chronic malnutrition, and increase access to assistive technology. We are investing in horizontal approaches to strengthen health systems through programs in human resources for health, digital health, and health financing. With each new and innovative program, our strategy is grounded in maximizing sustainable impact at scale, ensuring that governments lead the solutions, that programs are designed to scale nationally, and learnings are shared globally. At CHAI, our people are our greatest asset, and none of this work would be possible without their talent, time, dedication and passion for our mission and values. We are a highly diverse team of enthusiastic individuals across 40 countries with a broad range of skillsets and life experiences. CHAI is deeply grounded in the countries we work in, with majority of our staff based in programming countries. In India, CHAI works in partnership with its India registered affiliate William J Clinton Foundation (WJCF) under the guidance of the Ministry of Health and Family Welfare (MoHFW) at the Central and States' levels on an array of high priority initiatives aimed at improving health outcomes. Currently, WJCF supports government partners across projects to expand access to quality care and treatment for HIV/AIDS, Hepatitis, tuberculosis, COVID-19, common cancers, sexual and reproductive health, immunization, and essential medicines. Learn more about our exciting work: http://www.clintonhealthaccess.org Program Overview: India continues to bear the world’s highest burden of tuberculosis (TB) in terms of absolute numbers of incident TB cases. National TB prevalence survey (2019-21) revealed a significant 31.3% (estimated) crude prevalence of TB infection (TBI) among India’s population aged 15 years and above. Moreover, India has set an ambitious target of eliminating TB by 2025. The National Strategic Plan 2017–2025 outlines a critical target of initiating 95% of identified/eligible TBI cases on TB Preventive Treatment (TPT) by 2025. The TB Household Contact Management (TB HCM) project is a pioneering initiative addressing critical gaps in coverage and completion of TPT amongst household contacts of notified drug sensitive pulmonary TB patients, with particular focus on under five (U5) children. 
Planned to be implemented in Bihar and Uttar Pradesh, this four-year TB HCM project aims to impact over 2.5 million individuals through a community-based service delivery model that leverages community health workers from the National Tuberculosis Elimination Programme (NTEP) and general health systems. As the first large-scale implementation of TPT while focusing on Universal Health Coverage strategies, the project focuses on decentralizing and strengthening TB care within general health systems. Additionally, it incorporates an impact evaluation component, further enhancing its significance in advancing TB prevention and care in alignment with national health priorities and international best practices. Position Summary: WJCF seeks a highly motivated, results-oriented Senior Research Associate to support the TB HCM project, reporting to the National Monitoring, Evaluation & Research Manager. The role involves supporting study implementation, coordinating evaluation activities, providing technical input, and contributing to evidence generation to advance TB prevention strategies. The ideal candidate is a strategic thinker with strong leadership, analytical, and problem-solving skills, capable of working independently and collaboratively in a fast-paced, multicultural environment with appropriate guidance and mentorship. The Senior Research Associate will support engagement with government counterparts, donors, and external partners, and work across WJCF/CHAI teams to ensure project success. Responsibilities: 1. Coordination of external evaluation activities –40% Support and coordinate communication with the evaluation agency, ensuring alignment between the evaluation and program implementation, with the objective of ensuring timely information flow regarding any risks to the core elements of the program Support fieldwork for the planned RCT embedded within the program, ensuring high-quality data collection training. The candidate will also be expected to establish quality control mechanisms, implement them, and provide regular updates to the core national and global teams. Proactively identify and address any challenges affecting the design and implementation of the evaluation. Serve as the primary day-to-day point of contact for the evaluation agency, managing ongoing coordination activities not explicitly listed above, and ensuring the evaluation and implementation processes remain aligned under the guidance of the senior team. 2. Technical review and input – 25% Contribute to the technical review of study protocols, instruments, evaluation design, and analysis plans, in collaboration with the broader technical team Support the design, refinement, and implementation of an embedded randomized controlled trial (RCT) and other qualitative components (e.g., process evaluations, qualitative interviews) to assess the impact of the CbHCM model Assist with the submission of study tools to the Institutional Review Board (IRB) and other relevant Indian authorities (such as HMSC), as required Where needed, analyze quantitative data using Stata or other statistical software. Additionally, they contribute to the design of qualitative tools and assist in their implementation and analysis, including transcript coding using appropriate qualitative analysis software Collaborate with the technical team to respond to donor inquiries related to the impact evaluation and/or data from routine program monitoring 3. 
Evidence generation & Synthesis of learning – 35% Conduct primary and secondary research to address learning and evidence gaps in strategically relevant areas of implementation and evaluation. Support the in-country learning agenda by identifying and addressing evidence gaps for NTEP and CHAI/WJCF through complementary analyses. Participate in systematic reviews of secondary literature on related themes and maintain a bibliography of key citations using reference management software. Work closely with the National Monitoring & Evaluation Manager to align evaluation and program monitoring workstreams. Contribute to synthesizing learnings from implementation and evaluation efforts to inform new ideas and guide intervention design. Support the development and delivery of learning and dissemination materials, including reports, manuscripts, and other documentation. Qualifications: Bachelor’s or Master’s in epidemiology, economics, biostatistics, or a related field with a significant focus on quantitative skills (e.g., epidemiology and public/global health), with a strong understanding of inferential statistics. Minimum 5 years of applied work experience in resource-limited settings and/or a field requiring analytical problem-solving. Technical Skills: Strong command of experimental, quasi-experimental study designs and qualitative research methods. Experience in designing and implementing quantitative models and/or impact evaluation and/or qualitative research; fluency in concepts of statistical inference and data analysis. Strong skills in quantitative modeling, data management, and statistical analysis using software like Stata/R. Demonstrated experience with data collection workflows and platforms, such as SurveyCTO, Google Sheets or similar tools. Demonstrated experience with or involvement in the implementation of RCTs, quasi-experimental, or similar studies in India. Experience piloting survey instruments, training data collectors, and leading field logistics for large-scale studies. Stakeholder management and communication: An ability to communicate complex concepts clearly and support the development of actionable recommendations for a range of audiences including Ministries of Health, global donors and policy makers. Strong interpersonal skills, and an ability to navigate multi-cultural, multi-stakeholder situations collaboratively to achieve intended results. Organization, time management and self-motivation: Exceptional organizational skills and ability to approach complex problems in a structured manner. Strong ability to work independently, to develop and execute work-plans, and to achieve specified goals with limited guidance and oversight in a fast-paced environment. Demonstrated capacity to thrive in a work environment that requires effective balancing across parallel workstreams and deliverables. Willingness to travel (at least 25%) to Bihar and Uttar Pradesh. Last Date to Apply: 27th July, 2025
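For illustration of the quantitative analysis this role supports, below is a minimal sketch of an intention-to-treat estimate on synthetic RCT-style data, written in Python with statsmodels (the listing itself names Stata/R); every variable name and effect size here is invented.

```python
# Hedged sketch: linear probability model with a treatment dummy on synthetic trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),       # randomised assignment (hypothetical)
    "baseline_risk": rng.normal(0, 1, size=n),   # pre-treatment covariate (hypothetical)
})
# Hypothetical binary outcome with a built-in +10 percentage-point treatment effect.
p = (0.45 + 0.10 * df["treated"] + 0.02 * df["baseline_risk"]).clip(0.01, 0.99)
df["outcome"] = rng.binomial(1, p)

model = smf.ols("outcome ~ treated + baseline_risk", data=df).fit(cov_type="HC1")
print(model.params["treated"])          # estimated effect
print(model.conf_int().loc["treated"])  # 95% confidence interval
```

The real analysis would of course follow the trial's pre-specified analysis plan and run on cleaned SurveyCTO data rather than simulated values.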
Posted 1 month ago
40.0 years
5 - 10 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216765 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jun. 27, 2025 CATEGORY: Information Systems Principal Data Scientist Role Name: Principal Data Scientist Department Name: AI & Data Science Role GCF: 6 Hiring Manager Name: Swaroop Suresh ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: We are seeking a Principal AI Platform Architect —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen’s digital-transformation and cloud-modernization objectives. Design and evolve foundational platform components—feature stores, model-registry, experiment-tracking, vector databases, real-time inference gateways and evaluation harnesses—using cloud-agnostic, micro-service principles. Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving. Embed responsible-AI and security-by-design controls—data-privacy, lineage tracking, bias monitoring, audit logging—through policy-as-code and automated guardrails. Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources. Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend. Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs. Establish observability frameworks—metrics, distributed tracing, drift detection—to ensure models remain performant, reliable and compliant in production. Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI. Must-Have Skills: 5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). 
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. Good-to-Have Skills: Experience in the biotechnology or pharma industry is a big plus. Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science, Data Science or MBA with AI focus. Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 10-14+ years of experience in Computer Science, IT or a related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or a related field. Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail-oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago
7.0 years
25 - 35 Lacs
India
On-site
AI Lead – Generative & Agentic AI Systems Experience: 7–10 Years Location: Hyderabad (Hybrid) Employment Type: Full-Time About the Role: We are seeking a visionary and hands-on AI Lead to architect, build, and scale next-generation Generative and Agentic AI systems. In this role, you will drive the end-to-end lifecycle—from research and prototyping to production deployment—guiding a team of AI engineers and collaborating cross-functionally to deliver secure, scalable, and impactful AI solutions across multimodal and LLM-based ecosystems. Key Responsibilities: Architect and oversee the development of GenAI and Agentic AI workflows, including multi-agent systems and LLM-based pipelines. Guide AI engineers in best practices for RAG (Retrieval-Augmented Generation), prompt engineering, and agent design. Evaluate and implement the right technology stack: open source (Hugging Face, LangChain, LlamaIndex) vs. closed source (OpenAI, Anthropic, Mistral). Lead fine-tuning and adapter-based training (e.g., LoRA, QLoRA, PEFT). Drive inference optimization using quantization, ONNX, TensorRT, and related tools. Build and refine RAG pipelines using embedding models, vector DBs (FAISS, Qdrant), chunking strategies, and hybrid knowledge graph systems. Manage LLMOps with tools like Weights & Biases, MLflow, and ClearML, ensuring experiment reproducibility and model versioning. Design and implement evaluation frameworks for truthfulness, helpfulness, toxicity, and hallucinations. Integrate guardrails, content filtering, and data privacy best practices into GenAI systems. Lead development of multi-modal AI systems (VLMs, CLIP, LLaVA, video-text fusion models). Oversee synthetic data generation for fine-tuning in low-resource domains. Design APIs and services for Model-as-a-Service (MaaS) and AI agent orchestration. Collaborate with product, cloud, and infrastructure teams to align on deployment, GPU scaling, and cost optimization. Translate cutting-edge AI research into usable product capabilities, from prototyping to production. Mentor and grow the AI team, establishing R&D best practices and benchmarks. Stay up-to-date with emerging trends (arXiv, Papers With Code) to keep the organization ahead of the curve. Required Skills & Expertise: AI & ML Foundations: Generative AI, LLMs, Diffusion Models, Agentic AI Systems, Multi-Agent Planning, Prompt Engineering, Feedback Loops, Task Decomposition Ecosystem & Frameworks: Hugging Face, LangChain, OpenAI, Anthropic, Mistral, LLaMA, GPT, Claude, Mixtral, Falcon, etc. 
Fine-tuning & Inference: LoRA, QLoRA, PEFT, ONNX, TensorRT, DeepSpeed, vLLM Data & Retrieval Systems: FAISS, Qdrant, Chroma, Pinecone, Hybrid RAG + Knowledge Graphs MLOps & Evaluation: Weights & Biases, ClearML, MLflow, Evaluation metrics (truthfulness, helpfulness, hallucination) Security & Governance: Content moderation, data privacy, model alignment, ethical constraints Deployment & Ops: Cloud (AWS, GCP, Azure) with GPU scaling, Serverless LLMs, API-based inference, Docker/Kubernetes Other: Multi-modal AI (images, video, audio), API Design (Swagger/OpenAPI), Research translation and POC delivery Preferred Qualifications: 7+ years in AI/ML roles, with at least 2–3 years in a technical leadership capacity Proven experience deploying LLM-powered systems at scale Experience working with cross-functional product and infrastructure teams Contributions to open-source AI projects or published research papers (a plus) Strong communication skills to articulate complex AI concepts to diverse stakeholders Why Join Us? Work at the forefront of AI innovation with opportunities to publish, build, and scale impactful systems Lead a passionate team of engineers and researchers Shape the future of ethical, explainable, and usable AI products Ready to shape the next wave of AI? Apply now and join us on this journey! Job Types: Full-time, Permanent Pay: ₹2,500,000.00 - ₹3,500,000.00 per year Benefits: Flexible schedule Health insurance Provident Fund Supplemental Pay: Joining bonus Work Location: In person
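To ground the RAG responsibilities above, here is a minimal retrieval sketch using sentence-transformers and FAISS (both named in the stack); the embedding model, documents, and query are illustrative stand-ins, not project specifics.

```python
# Hedged sketch: embed text chunks, index them in FAISS, retrieve top-k context for a query.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are approved by the finance team within two business days.",
    "Refund requests older than 90 days require director sign-off.",
    "The on-call rotation for the payments service changes every Monday.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model
doc_vecs = encoder.encode(docs, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vecs)                       # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "Who approves refunds after three months?"
q_vec = encoder.encode([query], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(q_vec)
scores, ids = index.search(q_vec, 2)  # top-2 chunks

context = "\n".join(docs[i] for i in ids[0])  # stitched into the LLM prompt downstream
print(scores[0], context)
```

A production pipeline would add chunking, metadata filtering, a managed vector store (e.g. Qdrant), and retrieval-quality evaluation, but the embed-index-search core stays the same.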
Posted 1 month ago
5.0 years
50 Lacs
Coimbatore, Tamil Nadu, India
Remote
Experience: 5.00+ years Salary: INR 5000000.00 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' clients - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLflow, Supervised Learning, Time-Series Forecasting, Docker, Machine Learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus). What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting. Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker. Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn. Design and deploy LLM-powered features and workflows. Collaborate closely with product managers to turn ideas into experiments and production-ready solutions. Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform. Basic Skills Proven ability to work creatively and analytically in a problem-solving environment. Excellent communication (written and oral) and interpersonal skills. Strong understanding of supervised learning and time-series modeling. Experience deploying ML models and building automated training/inference pipelines. Ability to work cross-functionally in a collaborative and fast-paced environment. Comfortable wearing many hats and owning projects end-to-end. Write clean, tested, and scalable Python and SQL code. Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices. Prior experience with LLM-based features or production-level NLP. Experience with LLMs, vector stores, or prompt engineering. Contributions to open-source ML or data tools. TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
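As a short illustration of the time-series forecasting work in this stack, the sketch below fits Prophet (listed under Frameworks & Tools) to a synthetic daily series and produces a 30-day forecast; the data and horizon are hypothetical.

```python
# Hedged sketch: fit Prophet on a synthetic daily series and forecast 30 days ahead.
import numpy as np
import pandas as pd
from prophet import Prophet

dates = pd.date_range("2023-01-01", periods=365, freq="D")
y = 100 + 0.2 * np.arange(365) + 10 * np.sin(2 * np.pi * dates.dayofweek.to_numpy() / 7)  # trend + weekly cycle
df = pd.DataFrame({"ds": dates, "y": y})

model = Prophet(weekly_seasonality=True, yearly_seasonality=False)
model.fit(df)

future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

In the described setup, a job like this would be orchestrated by Airflow or Ray, tuned with Ray Tune, and logged to MLflow rather than run as a standalone script.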
Posted 1 month ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us: Headquartered in Sunnyvale, with offices in Dallas & Hyderabad, Fission Labs is a leading software development company, specializing in crafting flexible, agile, and scalable solutions that propel businesses forward. With a comprehensive range of services, including product development, cloud engineering, big data analytics, QA, DevOps consulting, and AI/ML solutions, we empower clients to achieve sustainable digital transformation that aligns seamlessly with their business goals. Role Overview: We are seeking an experienced AI Product Manager to own the lifecycle of machine-learning and generative-AI products—from concept through launch and iteration. You’ll translate market and user insights into clear roadmaps, partner with engineering and data-science teams to deliver scalable solutions, and measure impact against business KPIs. Experience: 7–9 years overall (incl. 3+ years in ML / data-platform products) Only immediate joiners preferred. Location: Hyderabad Responsibilities: Product strategy & vision Define long-term AI product strategy aligned with company OKRs and market trends. Champion user problems, competitive differentiators, and regulatory considerations. Roadmap & backlog ownership Convert strategy into epics, user stories, and acceptance criteria; prioritise using data-driven frameworks. Maintain a groomed backlog and lead sprint planning with cross-functional squads. End-to-end delivery Collaborate daily with engineering, data science, UX, QA, and DevOps to ship high-quality releases. Drive trade-off decisions on model choices (e.g., classical ML vs. LLM), latency, cost, and privacy. User & stakeholder engagement Conduct discovery interviews, usability testing, and beta programs with target users (developers, analysts, business owners). Present roadmaps, release demos, and performance updates to executive leadership and customers. Metrics & continuous improvement Define success KPIs (adoption, NPS, accuracy, cost per inference, ARR). Monitor dashboards and run A/B tests to optimize. Capture feedback loops to iterate on prompts, models, or feature sets. Market & ecosystem awareness Track advances in AI frameworks, vector databases, model hosting, and responsible-AI standards. Evaluate build / partner / buy options and recommend platform integrations or third-party partnerships. Qualifications: 7-9 years of total product-management experience, including 3+ years with AI / ML / data-platform products. Proven track record shipping B2B SaaS, platform APIs, or developer-tooling features end-to-end. Hands-on familiarity with AI concepts: supervised vs. unsupervised learning, model evaluation, prompt engineering, retrieval-augmented generation, MLOps pipelines. Ability to write clear PRDs, epics, and technical diagrams; comfortable using Jira, Confluence, or equivalent. Strong analytical skills; fluent with SQL/BI tools for product metrics. Excellent communication and stakeholder-management skills across business and technical audiences. Bachelor’s degree in Computer Science, Engineering, Data Science, or equivalent practical experience. Preferred / nice to have Experience with vector databases, LLM fine-tuning, or agentic AI orchestration. Understanding of cloud AI services (AWS, GCP, Azure) and cost-optimisation levers. Familiarity with responsible-AI frameworks and governance in regulated industries. MBA from a Tier-1 institution.
Posted 1 month ago
2.0 - 31.0 years
2 - 4 Lacs
Sri Nagar Colony, Hyderabad
On-site
Web Developer - Full Stack with AI Integration
Key Responsibilities
Collaborate with cross-functional teams to define, design, and ship new features including AI-powered capabilities. Develop server-side logic using Node.js, ensuring high performance and responsiveness to requests from front-end components. Build reusable and efficient front-end components using React.js, including AI-driven interfaces and data visualization. Integrate AI/ML APIs (OpenAI, Google AI, AWS AI services) and implement features like chatbots, content generation, and recommendation systems. Implement and maintain API integrations with third-party services and AI platforms. Optimize applications for maximum speed and scalability, including efficient AI model inference and response caching. Collaborate with team members to troubleshoot, debug, and optimize application performance across traditional and AI systems. Stay up-to-date with emerging AI technologies, web development frameworks, and industry trends. Implement security and data protection measures, including secure handling of AI model inputs/outputs. Participate in code reviews and provide constructive feedback on both web development and AI integration practices. Deploy applications on AWS and manage cloud infrastructure, utilizing AI/ML services like SageMaker and Bedrock. Develop prompt engineering strategies and build user interfaces for AI model interactions and monitoring.
Posted 1 month ago
15.0 years
0 Lacs
Delhi, India
On-site
The Clinton Health Access Initiative, Inc. (CHAI) is a global health organization committed to the mission of saving lives and reducing the burden of disease in low-and middle-income countries. We work at the invitation of governments to support them and the private sector to create and sustain high-quality health systems. CHAI was founded in 2002 in response to the HIV/AIDS epidemic with the goal of dramatically reducing the price of life-saving drugs and increasing access to these medicines in the countries with the highest burden of the disease. Over the following two decades, CHAI has expanded its focus. Today, along with HIV, we work in conjunction with our partners to prevent and treat infectious diseases such as COVID-19, malaria, tuberculosis, and hepatitis. Our work has also expanded into cancer, diabetes, hypertension, and other non-communicable diseases, and we work to accelerate the rollout of lifesaving vaccines, reduce maternal and child mortality, combat chronic malnutrition, and increase access to assistive technology. We are investing in horizontal approaches to strengthen health systems through programs in human resources for health, digital health, and health financing. With each new and innovative program, our strategy is grounded in maximizing sustainable impact at scale, ensuring that governments lead the solutions, that programs are designed to scale nationally, and learnings are shared globally. At CHAI, our people are our greatest asset, and none of this work would be possible without their talent, time, dedication and passion for our mission and values. We are a highly diverse team of enthusiastic individuals across 40 countries with a broad range of skillsets and life experiences. CHAI is deeply grounded in the countries we work in, with majority of our staff based in programming countries. In India, CHAI works in partnership with its India registered affiliate William J Clinton Foundation (WJCF) under the guidance of the Ministry of Health and Family Welfare (MoHFW) at the Central and States' levels on an array of high priority initiatives aimed at improving health outcomes. Currently, WJCF supports government partners across projects to expand access to quality care and treatment for HIV/AIDS, Hepatitis, tuberculosis, COVID-19, common cancers, sexual and reproductive health, immunization, and essential medicines. Learn more about our exciting work: http://www.clintonhealthaccess.org Program Overview India continues to bear the world’s highest burden of tuberculosis (TB) in terms of absolute numbers of incident TB cases. National TB prevalence survey (2019-21) revealed a significant 31.3% (estimated) crude prevalence of TB infection (TBI) among India’s population aged 15 years and above. Moreover, India has set an ambitious target of eliminating TB by 2025. The National Strategic Plan 2017-2025 outlines a critical target of initiating 95% of identified/eligible TBI cases on TB Preventive Treatment (TPT) by 2025. The TB Household Contact Management (TB HCM) project is a pioneering initiative addressing critical gaps in coverage and completion of TPT amongst household contacts of notified drug sensitive pulmonary TB patients, with particular focus on under five (U5) children.
Planned to be implemented in Bihar and Uttar Pradesh, this four-year TB HCM project aims to impact over 2.5 million individuals through a community-based service delivery model that leverages community health workers from the National Tuberculosis Elimination Programme (NTEP) and general health systems. As the first large-scale implementation of TPT while focusing on Universal Health Coverage strategies, the project focuses on decentralizing and strengthening TB care within general health systems. Additionally, it incorporates an impact evaluation component, further enhancing its significance in advancing TB prevention and care in alignment with national health priorities and international best practices. Position Summary WJCF seeks a highly motivated, results-oriented Senior Research Associate to support the TB HCM project, reporting to the National Monitoring, Evaluation & Research Manager. The role involves supporting study implementation, coordinating evaluation activities, providing technical input, and contributing to evidence generation to advance TB prevention strategies. The ideal candidate is a strategic thinker with strong leadership, analytical, and problem-solving skills, capable of working independently and collaboratively in a fast-paced, multicultural environment with appropriate guidance and mentorship. The Senior Research Associate will support engagement with government counterparts, donors, and external partners, and work across WJCF/CHAI teams to ensure project success. Coordination of external evaluation activities -40% Support and coordinate communication with the evaluation agency, ensuring alignment between the evaluation and program implementation, with the objective of ensuring timely information flow regarding any risks to the core elements of the program Support fieldwork for the planned RCT embedded within the program, ensuring high-quality data collection training. The candidate will also be expected to establish quality control mechanisms, implement them, and provide regular updates to the core national and global teams. Proactively identify and address any challenges affecting the design and implementation of the evaluation. Serve as the primary day-to-day point of contact for the evaluation agency, managing ongoing coordination activities not explicitly listed above, and ensuring the evaluation and implementation processes remain aligned under the guidance of the senior team. Technical review and input - 25% Contribute to the technical review of study protocols, instruments, evaluation design, and analysis plans, in collaboration with the broader technical team Support the design, refinement, and implementation of an embedded randomized controlled trial (RCT) and other qualitative components (e.g., process evaluations, qualitative interviews) to assess the impact of the CbHCM model Assist with the submission of study tools to the Institutional Review Board (IRB) and other relevant Indian authorities (such as HMSC), as required Where needed, analyze quantitative data using Stata or other statistical software. 
Additionally, they contribute to the design of qualitative tools and assist in their implementation and analysis, including transcript coding using appropriate qualitative analysis software. Collaborate with the technical team to respond to donor inquiries related to the impact evaluation and/or data from routine program monitoring. Evidence generation & Synthesis of learning - 35% Conduct primary and secondary research to address learning and evidence gaps in strategically relevant areas of implementation and evaluation. Support the in-country learning agenda by identifying and addressing evidence gaps for NTEP and CHAI/WJCF through complementary analyses. Participate in systematic reviews of secondary literature on related themes and maintain a bibliography of key citations using reference management software. Work closely with the National Monitoring & Evaluation Manager to align evaluation and program monitoring workstreams. Contribute to synthesizing learnings from implementation and evaluation efforts to inform new ideas and guide intervention design. Support the development and delivery of learning and dissemination materials, including reports, manuscripts, and other documentation. Bachelor’s or Master’s in epidemiology, economics, biostatistics, or a related field with a significant focus on quantitative skills (e.g., epidemiology and public/global health), with a strong understanding of inferential statistics. Minimum 5 years of applied work experience in resource-limited settings and/or a field requiring analytical problem-solving. Technical Skills Strong command of experimental, quasi-experimental study designs and qualitative research methods. Experience in designing and implementing quantitative models and/or impact evaluation and/or qualitative research; fluency in concepts of statistical inference and data analysis. Strong skills in quantitative modeling, data management, and statistical analysis using software like Stata/R. Demonstrated experience with data collection workflows and platforms, such as SurveyCTO, Google Sheets or similar tools. Demonstrated experience with or involvement in the implementation of RCTs, quasi-experimental, or similar studies in India. Experience piloting survey instruments, training data collectors, and leading field logistics for large-scale studies. Stakeholder management and communication An ability to communicate complex concepts clearly and support the development of actionable recommendations for a range of audiences including Ministries of Health, global donors and policy makers. Strong interpersonal skills, and an ability to navigate multi-cultural, multi-stakeholder situations collaboratively to achieve intended results. Organization, time management and self-motivation Exceptional organizational skills and ability to approach complex problems in a structured manner. Strong ability to work independently, to develop and execute work-plans, and to achieve specified goals with limited guidance and oversight in a fast-paced environment. Demonstrated capacity to thrive in a work environment that requires effective balancing across parallel workstreams and deliverables. Willingness to travel (at least 25%) to Bihar and Uttar Pradesh. Last Date to Apply: 27th July, 2025
Posted 1 month ago