5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Innovix Pro
Innovix Pro is a pioneering engineering R&D and manufacturing firm headquartered in Ahmedabad. We design and manufacture a diverse range of robotics products for both commercial and consumer sectors. Our flagship product, Curio, is a guide robot for museums, showcasing our commitment to innovation. We also provide functional prototyping, engineering design, additive manufacturing, batch manufacturing, and custom product development. Our team combines expertise in engineering design, electronics and PCB development, software, and AI integration, transforming ideas into high-precision real-world solutions.

Role Overview
We are seeking a Lead Robotics Engineer with deep expertise in ROS 2 (Robot Operating System 2) to develop autonomous and socially interactive robots. The role involves leading the design, development, and optimization of ROS 2-based robotic platforms, integrating vision, LiDAR, custom actuators, and AI frameworks to deliver robust, production-ready solutions.

Key Responsibilities

ROS 2 Development & Middleware
• Develop and implement ROS 2 nodes, packages, and launch files using rclcpp and rclpy.
• Utilize DDS (Data Distribution Service) middleware (Fast DDS, Cyclone DDS, RTI Connext) for efficient inter-node communication.
• Design and manage ROS 2 workspaces (colcon, ament_cmake) with proper package dependencies.
• Implement ROS 2 parameters, topics, services, actions, and lifecycle nodes for modular design.
• Leverage ros2_control (the ROS 2 successor to ros_control) and MoveIt 2 for actuator and manipulator control.

Localization, Mapping & Navigation
• Implement the Nav2 (Navigation2) stack for path planning, obstacle avoidance, and navigation in indoor/outdoor environments.
• Apply SLAM toolboxes (Cartographer, GMapping, RTAB-Map, ORB-SLAM2/3) for localization and environment mapping.
• Integrate sensor fusion using robot_localization (EKF/UKF filters for IMU, GPS, and wheel odometry).
• Tune PID controllers for accurate wheel odometry and trajectory tracking.
Perception & AI Integration
• Integrate camera, LiDAR, and depth-sensor drivers with the ROS 2 ecosystem.
• Implement vision-based pipelines with image_transport, cv_bridge, and OpenCV.
• Deploy deep learning frameworks (YOLOv5/v8, TensorFlow, PyTorch, OpenVINO, NVIDIA TensorRT) for object detection and tracking.
• Develop AI-driven behaviors and multi-sensor fusion for robust decision-making.

Multi-Robot Systems & Communication
• Architect multi-robot frameworks with ROS 2 for distributed navigation and task allocation.
• Implement ROS 2 namespaces, remapping, and multi-machine discovery setups for fleet operation.
• Optimize QoS (Quality of Service) policies for real-time communication in dynamic environments.

System Optimization & Deployment
• Develop lightweight, resource-optimized ROS 2 applications for embedded platforms (Jetson, Raspberry Pi, Intel NUC).
• Manage deployment using Docker containers with ROS 2 for reproducibility and scalability.
• Benchmark system performance and optimize computational loads for vision, SLAM, and navigation.
• Integrate ROS 2 with Ubuntu (20.04/22.04), Windows Subsystem for Linux, and Android.

Collaboration & Leadership
• Lead a cross-functional team of electronics, mechanical, and software engineers.
• Mentor junior robotics engineers in ROS 2 best practices and coding standards.
• Ensure robust system design from prototype to production deployment.

Qualifications
• Bachelor's/Master's degree in Robotics, Electronics, Mechatronics, or related fields.
• 4–5 years of proven experience developing ROS 2-based autonomous platforms (industrial or commercial robots).
Strong expertise in:
• ROS 2 middleware (rclcpp, rclpy, DDS, colcon, ament_cmake)
• The Nav2 stack and SLAM frameworks (Cartographer, RTAB-Map, ORB-SLAM)
• ros2_control, MoveIt 2, robot_localization
• Computer vision (OpenCV, YOLO, TensorRT, DeepStream)
• Sensor integration (LiDAR, IMU, cameras, depth sensors, GPS/RTK)
• PID tuning, motion control, and multi-robot systems

Additional requirements:
• Proficiency in C++17/20 and Python 3 for robotics development.
• Hands-on experience with simulation and visualization frameworks (Gazebo, Ignition, RViz2, Webots, Unity Robotics).
• Strong analytical, debugging, and system optimization skills.
• Excellent communication and leadership capabilities.

Preferred Skills (Good to Have)
• Experience with the ROS 2 Galactic, Humble, or Iron distributions.
• Knowledge of real-time systems (PREEMPT_RT, micro-ROS).
• Cloud robotics and IoT integration with ROS 2 (AWS RoboMaker, FogROS2).
• Experience in outdoor navigation with drones (GPS + vision fusion, PX4/ArduPilot + ROS 2 integration).

Why Join Innovix Pro?
• Lead cutting-edge ROS 2 robotics projects in India's growing robotics ecosystem.
• Work on socially interactive autonomous robots, drones, and industrial automation systems.
• Contribute to real-world deployments of advanced autonomous robotics platforms.
• Grow as a technical leader in robotics R&D.
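The responsibilities above call for tuning PID controllers for wheel odometry and trajectory tracking. As a rough, hedged sketch only (plain Python, with invented gains and a toy first-order wheel-speed plant, not Innovix Pro's actual control code), a discrete PID velocity loop might look like:

```python
class PID:
    """Textbook discrete PID with rectangular integration."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(target=1.0, steps=1000, dt=0.01):
    """Drive a toy first-order wheel-speed plant v' = (u - v) / tau."""
    pid = PID(kp=2.0, ki=2.0, kd=0.05, dt=dt)   # illustrative gains only
    v, tau = 0.0, 0.2
    for _ in range(steps):
        u = pid.update(target, v)
        v += (u - v) / tau * dt                 # Euler-integrate the plant
    return v


print(round(simulate(), 3))  # settles near the 1.0 m/s setpoint
```

In a real ros2_control stack the same error/integral/derivative arithmetic would run inside a controller plugin at a fixed update rate, with anti-windup and output limits added.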
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Software Engineer – Geometric Vision
Location: Bangalore, India
Role Level: 2–4 years of work experience in a similar background (Computer Vision, 3D Geometry, Localization).
Salary: 18 L

Key Requirements:
• Design, develop, and implement on-board computer vision algorithms, using both classical and modern methods, for real-time robot localization in an indoor warehouse setting.
• Contribute to geometric vision algorithms involving 3D computer vision tasks such as dense/sparse 3D reconstruction, multi-view pose estimation, RGB-D fusion, 3D object detection, and 3D dimensioning.
• Implement sensor fusion techniques to integrate data from multiple sensors such as LiDAR, cameras, IMU, and ToF to enhance the accuracy and robustness of vehicle navigation.
• Contribute to the product architecture, make decisions on sensor selection, deploy solutions at customer sites, and gauge system efficiency on a day-to-day basis.
• Own the development and implementation of pipelines for computer vision and machine learning algorithms, and package software for efficiency.
• Hands-on experience in computer vision and machine learning projects, solving real-world problems involving vision tasks.
• Ability to understand problem statements and implement and test new ideas under supervision.
• Maintain comprehensive documentation of code, algorithms, and pipelines.
• Work collaboratively in a team and communicate effectively.

Must Have:
• Strong Python programming skills, with an understanding of deep learning frameworks and workflows; responsible for thorough testing and debugging of Python code, ensuring the reliability and robustness of pipeline implementations.
• Strong foundation in physics and robotics systems, including ROS, 3D simulation, and sensor-fusion-based navigation algorithms.
• Proven experience developing navigation and localization algorithms for autonomous vehicles, robotics, drones, maritime vehicles, or similar applications.
• Hands-on experience in computer vision (classical and modern), visual odometry, and SLAM projects, solving real-world problems involving vision, navigation, and localization tasks.
• Deep mathematical foundations, with knowledge of 3D multi-view geometry, advanced linear algebra, numerical optimization, etc.
• Deep insight into depth cameras and the characteristics of 3D data.
• Ownership of the development and implementation of computer vision modules, and packaging of software for efficiency.
• Maintain comprehensive documentation of code, algorithms, and pipelines.
• Work collaboratively in a team and communicate effectively.
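Several of the bullets above centre on robust geometric estimation. As an illustrative sketch under simplifying assumptions (pure Python on a synthetic point cloud; a production pipeline would use Open3D or PCL and refine the winner with least squares), RANSAC plane fitting might look like:

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three points as (unit normal n, offset d) with n·x + d = 0."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],        # cross product u × v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-9:                        # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Return (normal, d, inliers) for the plane supported by most points."""
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best[2]):
            best = (n, d, inliers)
    return best

# Synthetic scene: a noisy z = 0 floor plane plus scattered outliers above it.
rng = random.Random(1)
cloud = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.gauss(0, 0.01))
         for _ in range(200)]
cloud += [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0.3, 2.0))
          for _ in range(40)]
n, d, inliers = ransac_plane(cloud)
print(len(inliers), [round(abs(c), 2) for c in n])  # most floor points; normal near (0, 0, ±1)
```

The iteration count, inlier tolerance, and scene are all invented for the demo; real deployments size them from sensor noise and the required success probability.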
Posted 1 month ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🧠 About the Role
We are looking for an Autonomous Systems Engineer with strong experience in robotics, controls, estimation, and embedded software. This is a hands-on role where you'll design, implement, and deploy autonomy stacks for UGVs, working across localization, control, sensor fusion, motion planning, and real-time deployment. Your work will be used in the field, integrated into platforms that navigate unstructured environments autonomously.

🔧 What You'll Work On

🧭 Localization & Sensor Fusion
Implement EKF, UKF, or factor graphs to fuse IMU, wheel odometry, GPS (RTK/L1), magnetometer, and vision-based odometry
Perform real-time dead reckoning and pose estimation
Handle time synchronization, sensor latency, and covariance propagation

🎯 Control Systems
Design PID, LQR, and hybrid control architectures for skid-steer / Ackermann platforms
Interface with low-level firmware via CAN, UART, or shared memory
Track trajectories using spline-based and discrete path followers

🛤 Motion Planning
Implement planners such as A*, D*, RRT*, DWA, and spline-based methods
Integrate costmaps, dynamic constraints, and real-time path generation
Develop local obstacle avoidance using potential/vector fields

🤖 Robot Modeling
Derive forward/inverse kinematics and dynamic models (Newton-Euler, Lagrangian)
Handle slip, disturbance modeling, and Jacobian computation
Manage SE(3) transforms across body, sensor, map, and ENU/NED frames

⚙️ System Integration (ROS 2)
Develop modular autonomy software in ROS 2 using nodes, messages, and actions
Build the architecture for localization, control, planning, and perception
Integrate diagnostics, failsafe logic, and heartbeat systems

👁️ Perception (Preferred)
Use LiDAR, stereo, depth, or event cameras for terrain analysis and obstacle detection
Develop point cloud pipelines (e.g., voxel grid, NDT) and basic semantic segmentation

🧰 Tech Stack & Tools
Languages: C++17/20 (multi-threading, hardware abstraction), Python
Frameworks: ROS 2 (rclcpp, Nav2), CMake, colcon, DDS
Libraries: Eigen, Sophus, Ceres Solver, NumPy/SciPy
Sim & Debug: RViz, Gazebo, Isaac Sim, rosbag, custom loggers
Hardware: Jetson, STM32, RTOS, CAN, SPI, I2C

✅ What You Bring
B.Tech / M.Tech / Ph.D. in Robotics, Mechatronics, Controls, or CS/EE with a robotics specialization
4+ years of hands-on experience in real-world robot autonomy
Strong fundamentals in kinematics and dynamics, estimation and filtering, feedback and motion control, and C++ and Linux-based robotics development
Proven deployment on physical platforms (not just simulations)

🎯 Why Join Us?
Work at the frontier of autonomous mobility
Own your systems end-to-end, from design to deployment
Collaborate with a passionate, tight-knit robotics team
See your code power real UGVs in live environments
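The motion-planning section above leads with grid planners such as A*. As a toy, hedged illustration (plain Python on a hypothetical 4×4 occupancy grid; production planners layer in costmaps and kinodynamic constraints), one might write:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = occupied).
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so the heap never compares cells or parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:  # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
print(path)  # 7-cell shortest route through the gap at (1, 2)
```

D* and its variants extend the same search to replan incrementally as the costmap changes, which matters once the UGV's sensors update obstacles online.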
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Perception Engineering Intern / Apprentice at 10xConstruction, you will help our autonomous drywall-finishing robots "see" the job site. You'll design and deploy perception pipelines—camera + LiDAR fusion, deep-learning vision models, and point-cloud geometry—to give the robot the awareness it needs.

Key Responsibilities
• Build ROS 2 nodes for 3D point-cloud ingestion, filtering, voxelisation, and wall-plane extraction (PCL / Open3D)
• Train and integrate CNN / Transformer models for surface-defect detection and semantic segmentation
• Implement RANSAC-based pose, plane, and key-point estimation; refine with ICP or Kalman/EKF loops
• Fuse LiDAR, depth camera, IMU, and wheel odometry data for robust SLAM and obstacle avoidance
• Optimise and benchmark models on Jetson-class edge devices with TensorRT / ONNX Runtime
• Collect, label, and augment real and synthetic datasets; automate experiment tracking (Weights & Biases, MLflow)
• Collaborate with the manipulation, navigation, and cloud teams to ship end-to-end, production-ready perception stacks

Qualifications & Skills
• Solid grasp of linear algebra, probability, and geometry; coursework or projects in CV or robotics perception
• Proficient in Python 3.x and C++17/20; comfortable with git and CI workflows
• Experience with ROS 2 (rclcpp / rclpy) and custom message / launch setups
• Familiarity with deep-learning vision (PyTorch or TensorFlow): classification, detection, or segmentation
• Hands-on work with point-cloud processing (PCL, Open3D); know when to apply voxel grids, KD-trees, RANSAC, or ICP
• Bonus: exposure to camera–LiDAR calibration or real-time optimisation libraries (Ceres, GTSAM)

Why Join Us
• Work side-by-side with founders and senior engineers to redefine robotics in construction
• Build tech that replaces dangerous, repetitive wall-finishing labor with intelligent autonomous systems
• Help shape not just a product but an entire company, and see your code on real robots at active job sites

Requirements
Python 3.x, C++17/20, ROS 2, PyTorch, Open3D, RANSAC
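Voxelisation, the first processing step named above, can be sketched in a few lines. This is an illustrative toy (plain Python, hypothetical leaf size and points); a real stack would call PCL's VoxelGrid filter or Open3D's voxel_down_sample:

```python
from collections import defaultdict
from math import floor

def voxel_downsample(points, leaf=0.1):
    """Replace all points falling in one leaf×leaf×leaf voxel with their
    centroid, the same idea as PCL's VoxelGrid / Open3D's voxel_down_sample."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(floor(c / leaf) for c in p)   # integer voxel coordinates
        buckets[key].append(p)
    return [tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
            for pts in buckets.values()]

# Two tight clusters collapse to two representative points.
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.02),   # voxel (0, 0, 0)
         (0.52, 0.51, 0.50), (0.55, 0.53, 0.51)]  # voxel (5, 5, 5)
print(voxel_downsample(cloud))
```

Downsampling first keeps the later, more expensive stages (RANSAC, ICP, segmentation) running at frame rate on Jetson-class hardware, at the cost of sub-leaf detail.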
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Pune, Maharashtra
Remote
Location: Pune
Qualification: Bachelor's degree in Robotics, Computer Science, Electrical Engineering, or a related field.

Role Overview:
We are seeking a Senior SLAM Engineer to join our core autonomy team. This role will focus on developing and optimizing 3D LiDAR-based SLAM solutions, enabling reliable and accurate localization and mapping for autonomous robots deployed across complex, real-world environments.

Responsibilities:
• Develop robust, real-time SLAM systems using 3D LiDARs and other onboard sensors.
• Design and implement factor-graph-based optimization pipelines (e.g., GTSAM, Ceres, iSAM2).
• Integrate SLAM with IMU, wheel odometry, and vision-based systems for sensor fusion.
• Continuously test and validate SLAM modules across diverse real-world deployments, including crowded airports, large malls, and hospitals.
• Collaborate closely with the perception, controls, and navigation teams to ensure seamless robot autonomy.
• Profile and optimize SLAM performance for embedded compute platforms.
• Contribute to the long-term SLAM roadmap and evaluation infrastructure.

Minimum Qualifications:
• Bachelor's degree in Robotics, Computer Science, Electrical Engineering, or a related field.
• 3+ years of hands-on experience developing SLAM systems for autonomous robots.
• Proven expertise with 3D LiDAR-based SLAM in large, dynamic indoor environments.
• Strong grasp of probabilistic estimation, sensor fusion, and graph optimization.
• Proficiency in C++ and Python in a Linux-based development environment.
• Experience with GTSAM, iSAM2, or Ceres Solver.
• Familiarity with ROS/ROS 2 and standard robotics middleware.
• A track record of deploying SLAM solutions in real-world applications.

Preferred Qualifications:
• Experience with loop closure, global mapping, and multi-session SLAM.
• Familiarity with PCL, Open3D, or similar libraries for point cloud processing.
• Contributions to open-source SLAM projects or publications at leading robotics conferences (e.g., ICRA, IROS, RSS).
• Knowledge of embedded systems and performance optimization on resource-constrained hardware.
• Experience working with remote or distributed robotics teams.

Job Types: Full-time, Permanent
Pay: Up to ₹1,500,000.00 per year
Schedule: Day shift
Education: Bachelor's (Preferred)
Experience:
• SLAM: 2 years (Required)
• GTSAM, iSAM2, or Ceres Solver: 2 years (Required)
• C++ and Python in a Linux-based development environment: 2 years (Preferred)
Location: Pune, Maharashtra (Required)
Work Location: In person
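The graph-optimization idea behind the role can be shown on a deliberately tiny example. This is a 1-D toy with made-up measurements (real systems optimize SE(3) poses with GTSAM or Ceres): three 1.0 m odometry edges plus one loop-closure measurement that exposes 0.3 m of accumulated drift.

```python
def solve_pose_graph():
    """Least-squares over residuals (x1-x0-1), (x2-x1-1), (x3-x2-1),
    and loop closure (x3-x0-2.7), with x0 fixed at 0.
    The normal equations below come from setting the gradient to zero."""
    A = [[2.0, -1.0, 0.0],     # d/dx1
         [-1.0, 2.0, -1.0],    # d/dx2
         [0.0, -1.0, 2.0]]     # d/dx3
    b = [0.0, 0.0, 3.7]
    # Tiny Gaussian elimination (no pivoting needed: A is SPD).
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

print([round(v, 3) for v in solve_pose_graph()])  # → [0.925, 1.85, 2.775]
```

The optimizer spreads the 0.3 m loop error evenly: every odometry edge shrinks from 1.0 m to 0.925 m, which is exactly the "loop closure corrects global drift" behavior the responsibilities describe, just in one dimension.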
Posted 2 months ago
0 years
0 Lacs
Rajkot, Gujarat, India
On-site
Stride Dynamics
We are an early-stage robotics startup developing autonomous aerial robots. We are IIT Kanpur alumni with extensive experience building autonomous systems for government, defence, and enterprise customers in India and abroad. With Stride Dynamics, we envision leading the next generation of autonomous aerial robots in India and making global-standard products for defence, government, and enterprises.

The Role
We are looking for someone with a passion for working on hardware and autonomous systems. As a robotics engineer, you will work on our core technology for autonomous flight and contribute from conceptualisation to deployment. You will have the opportunity to work on localisation, controls, perception, navigation, and planning. We are developing aerial vehicles with very robust localisation, enabling them to navigate in any conditions (indoors, dark, dusty, high-altitude, GNSS-jamming scenarios, etc.).

The Work
• Design, develop, and debug the autonomy software stack for our systems.
• Work on computer vision, learning-based perception, and localisation for aerial systems.
• Do a lot of testing in real-world environments.
• Document and maintain efficient, modular, and reliable C++ code.
• Develop and improve algorithms for the various autonomy modules.
• Research, understand, and implement state-of-the-art methods.

We're looking for someone with
• Experience with hardware and implementing algorithms.
• Experience in C++, Python, and ROS.
• Experience with computer vision and localisation (filtering, pose-graph optimisation, visual odometry).
• Experience with a Linux development environment and tools like CMake, Git, etc.

Bonus if you:
• Have hands-on experience with robots through projects or competitions.
• Have experience with or knowledge of deep-learning-based approaches in robotics.
• Have experience with GPU/VPU-accelerated programming (e.g., CUDA, OpenCL).
• Have published research in the robotics domain.

If you match the above, why us?
• Work in a culture that celebrates innovation, creativity, and the freedom to challenge the status quo.
• Work with a team of people who are passionate about hardware and robotics.
• Join us and help us design the future of drones!

Apart from the above job description, if you think you can contribute in other domains (e.g., embedded software, hardware, machine learning), feel free to reach out to us.
Posted 3 months ago
1.0 years
0 Lacs
Hyderabad, Telangana
On-site
Job Description
We are looking for a passionate and skilled Robotics AI/ML Engineer to join our team in developing intelligent and autonomous drone systems. You will lead the development of drone software stacks, integrating onboard intelligence (AI/ML) with robotic middleware (ROS/ROS 2) and backend systems. The ideal candidate has at least 1 year of hands-on experience building real-world robotics or drone software, with a strong command of ROS/ROS 2, computer vision, and machine learning applied to autonomous navigation, perception, or decision-making.

Key Responsibilities

Drone Software Development
• Build and maintain core ROS/ROS 2-based software for autonomous flight, navigation, and perception
• Develop real-time systems to handle sensor fusion, path planning, obstacle avoidance, and mission execution
• Implement algorithms for drone localization (GPS, SLAM, visual odometry) and mapping

AI/ML Integration
• Develop and train AI/ML models for perception (e.g., object detection, tracking, segmentation)
• Deploy and optimize AI models on edge hardware (Jetson, Raspberry Pi, Odroid, etc.)
• Work on multi-camera vision, LiDAR fusion, and real-time inference pipelines

System Integration & Backend Communication
• Integrate drone software with backend/cloud systems using rosbridge, WebSockets, MQTT, or custom APIs
• Build data pipelines for telemetry, health monitoring, and AI inference output
• Work with the DevOps/backend teams to ensure a smooth interface with mission control and analytics dashboards

Testing & Simulation
• Set up and manage simulated environments (e.g., Gazebo, Ignition) for testing flight logic and AI behavior
• Conduct real-world test flights with live data and iterative tuning of software models

Required Qualifications
• Bachelor's or Master's degree in Robotics, Computer Science, Electrical Engineering, or a related field
• Minimum 1 year of experience building autonomous systems using ROS/ROS 2
• Proficiency in Python and C++, with experience writing ROS nodes and launch files
• Experience deploying AI/ML models for perception or control (e.g., YOLO, DeepSORT, CNNs, LSTMs)
• Hands-on experience with drones or mobile robotics platforms (simulation or real-world)
• Comfortable with version control (Git), Linux environments, and debugging complex robotic systems

Preferred Skills
• Experience with drone-specific stacks (PX4, ArduPilot, MAVROS)
• Experience with edge AI deployment tools (TensorRT, ONNX, OpenVINO)
• Familiarity with CV frameworks such as OpenCV, TensorFlow, PyTorch
• Experience with cloud platforms for robotics (AWS RoboMaker, Azure, etc.)
• Understanding of control systems (PID, MPC), SLAM, or multi-agent systems
• Knowledge of cybersecurity best practices in robotics and IoT

Job Types: Full-time, Permanent
Pay: ₹180,000.00 - ₹240,000.00 per year
Schedule: Day shift, fixed shift, Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or plan to relocate before starting work (Required)
Application Question(s): Have you ever worked on or built a drone?
Experience: Robotics AI/ML: 1 year (Required)
License/Certification: AI/ML certification (Preferred)
Location: Hyderabad, Telangana (Required)
Work Location: In person
Posted 3 months ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At Karman Drones, we're building the next generation of unmanned aerial vehicles engineered for high-impact missions. We're looking for a Senior UAV Systems Engineer who thrives in a fast-paced environment and is passionate about solving complex technical challenges across real-time, safety-critical systems. In this role, you'll work at the intersection of control theory, embedded systems, and aerial vehicle integration, driving innovation across fixed-wing, VTOL, and multirotor UAV platforms.

What You'll Do
● Architect, model, and optimize UAV systems for high-reliability flight operations.
● Lead root cause investigations for aircraft incidents and implement corrective system improvements.
● Integrate, configure, and tune autopilots (PX4, ArduPilot, custom stacks) across diverse airframes.
● Analyze flight logs, telemetry streams, and onboard sensor data to fine-tune flight performance.
● Design and validate system-level avionics, including power systems, EMI mitigation, and hardware layout.
● Collaborate with cross-functional teams on mission-specific configurations and payload integration.
● Drive rigorous testing workflows: HIL/SIL simulations, validation protocols, and safety analysis.

What You Bring
● Education: B.Tech or M.Tech in Mechatronics, Electronics & Communication, Aerospace Engineering, or related fields.
● Experience:
○ 2+ years of experience with a B.Tech, or 1+ year with an M.Tech, in UAV system development and integration.
○ Proven expertise in real-time autopilot tuning, control system calibration, and flight logic optimization using PX4, ArduPilot, or proprietary flight stacks.
○ Solid grounding in control theory, sensor fusion (EKF/UKF), PID tuning, and UAV flight dynamics.
○ Strong experience with flight data log analysis tools: Mission Planner, QGroundControl, UAVCAN monitors, uLog/BIN viewers.
○ Familiarity with the MAVLink protocol (custom messages, telemetry streaming, mission control, payload management).
○ System-level understanding of UAV architecture: avionics layout, EMI/EMC best practices, power distribution, and vibration mitigation.
○ Practical knowledge of embedded autopilot platforms (e.g., Cube Orange+, Pixhawk series, STM32 boards) and peripheral integration (GNSS, magnetometer, barometer, airspeed sensor, rangefinder).
○ Hands-on experience with SIL/HIL testing for control validation using Gazebo, JSBSim, or similar simulation tools.
○ Proficiency with serial bus protocols and RC systems: SBUS, PPM, DSM, CRSF.
○ Demonstrated payload integration (EO/IR gimbals, LiDAR, hyperspectral, SIGINT) and interface development for gimbal control.
○ Familiarity with fail-safe implementations and autonomous recovery (RTL, parachute deployment, geofence management).
○ Awareness of state-of-the-art avionics trends, including AES-encrypted RF, FHSS, anti-jam/anti-spoof systems, and secure telemetry.

Bonus Skills (Preferred)
● Experience with edge computing platforms (NVIDIA Jetson, Raspberry Pi CM4, etc.) for autonomy or perception.
● Familiarity with ROS/ROS 2 for mission logic or autonomous flight control.
● Knowledge of GNSS-denied navigation techniques (visual odometry, LiDAR SLAM, inertial-only navigation).
● Awareness of secure communication protocols, anti-jamming/anti-spoofing tech, and EW-resilient system design.
● Hands-on work with SDR-based payloads, RF link analysis, and antenna system integration.

Why Join Karman Drones?
At Karman Drones, you'll be working on industry-defining aerial platforms designed for real-world applications in defense, mapping, surveillance, and emergency response. We are a tight-knit, innovation-driven team of engineers, technologists, and visionaries pushing the boundaries of unmanned systems in India.

By joining us, you will:
● Contribute to real-world projects that make a national impact
● Work on cutting-edge UAV technology, from design to deployment
● Be part of a team that values innovation, speed, and continuous learning
● Grow your career in a fast-evolving and mission-focused startup environment

How to Apply
Click the "Apply Now" button, or send your resume and relevant documents to careers@karmandrones.com.
Take your UAV expertise to the next level and help shape the future of aerial technology with us.
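The sensor-fusion grounding this role asks for can be illustrated with the simplest attitude estimator there is: a first-order complementary filter. Everything here is a toy with invented rates, bias, and gain (real autopilots such as PX4 and ArduPilot run full EKFs), but it shows why fusing the two sensors beats either alone:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (smooth but drifting) with the
    accelerometer-derived tilt angle (noisy but drift-free)."""
    angle = 0.0
    for gyro, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + gyro * dt) + (1.0 - alpha) * acc
    return angle

# Hover scenario: true pitch is 0.1 rad, but the gyro reports only a
# constant 0.01 rad/s bias. Pure gyro integration would drift to 0.2 rad
# over these 20 s; the filter stays pinned near the accelerometer angle.
est = complementary_filter([0.01] * 2000, [0.1] * 2000)
print(round(est, 3))
```

The blend weight alpha trades gyro smoothness against accelerometer correction; tuning it (or replacing the whole scheme with an EKF whose gains follow from the noise covariances) is exactly the kind of calibration work the posting describes.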
Posted 3 months ago