About the Role

We are building an ultra-compact, high-performance computing module integrating FPGA/ASIC, multi-core MCU, AI accelerators (GPU/NPU), precision timestamping (PTP/IEEE 1588), multi-sensor interfaces, and a custom PMIC, all within an extremely small form factor. We are seeking a PCB Designer who can translate complex semiconductor, robotics, and embedded-system requirements into a robust, manufacturable, high-density PCB design. The ideal candidate has hands-on experience with HDI PCBs, high-speed routing, signal integrity, and miniaturized electronics.

Responsibilities

PCB Architecture & Design
- Design multi-layer, high-density PCBs for systems integrating FPGA/ASIC, MCU, GPU/NPU, memory, and PMIC.
- Implement high-speed differential routing for MIPI-CSI/DSI, PCIe, DDR, SERDES, and Ethernet-PTP traces.
- Lay out compact PCBs capable of supporting multiple cameras, depth sensors, IMUs, pressure/torque sensors, and motor control drivers.
- Apply best practices for noise filtering, analog/digital isolation, grounding, and power integrity.

Sensor and Robotics Interface Integration
- Design interfaces for SPI, I²C, UART, CAN/CAN-FD, MIPI-CSI2, and precision timestamping circuitry.
- Integrate motor driver circuits and feedback loops (encoder, torque, pressure, temperature sensors).
- Collaborate with FPGA/ASIC engineers to meet deterministic latency and synchronization requirements.

Power & Thermal Management
- Work with PMIC engineers to design multi-rail power distribution networks (PDNs).
- Optimize power plane layout for core logic (CPU/GPU/NPU/FPGA), sensor modules, and actuators.
- Support thermal design (stack-up recommendations, heat spreader/heat sink placement, copper pours, thermal vias).

Manufacturing & Validation
- Generate complete documentation: schematics, BOM, Gerbers, stack-up, fabrication notes, assembly drawings.
- Support prototype manufacturing, DFM/DFT reviews, and debugging builds.
- Work with contract manufacturers to ensure manufacturability at high volume (mass production).
- Conduct PCB bring-up with hardware engineers and support initial testing/validation.

Requirements

Technical Skills
- 4+ years of professional PCB design experience.
- Strong proficiency with Altium Designer, Cadence Allegro/OrCAD, or KiCad (advanced).
- Experience with high-speed PCB design (MIPI, DDR, PCIe, gigabit SERDES).
- Knowledge of SI/PI (signal and power integrity), impedance control, and controlled stack-ups.
- Experience designing boards with:
  - FPGAs or ASICs
  - Multi-core MCUs (ARM Cortex-M/A series)
  - PMICs and multi-voltage rails
  - On-board memory (LPDDR4/4X, DDR5, eMMC, NAND, NOR)
  - Precision sensors and actuators

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,500,000.00 per year
About the Role

We are building the next generation of spatial intelligence, where robots and 3D systems understand and interact with the world in real time. As a Multimodal LLM Engineer, you will design, train, and deploy vision-language models that understand detected objects, 3D environments, and dynamic scenes. Your work will enable robots and digital tools to reason about objects, context, safety, and actions, entirely on-device. You will collaborate closely with perception, robotics, and systems engineers to bring together 3D vision, object detection, and LLM reasoning into a unified real-time intelligence engine. This is a highly technical role with direct impact on core product capabilities.

Responsibilities
- Develop and fine-tune multimodal LLMs (vision-language, 3D-language, object-context reasoning).
- Build pipelines that fuse object detections, 3D data, bounding boxes, and sensor inputs into LLM tokens.
- Architect models that interpret dynamic scenes, track changes, and deliver contextual reasoning.
- Implement region-based reasoning, spatial attention, temporal understanding, and affordance prediction.
- Train and optimize models using frameworks such as LLaVA, Qwen-VL, InternVL, CLIP/SigLIP, SAM, DETR, or custom backbones.
- Convert raw perception output into structured representations (scene graphs, spatial embeddings).
- Work with robotics/systems teams to integrate LLM reasoning into real-time pipelines (30–60 FPS).
- Develop scalable data pipelines for multimodal datasets (images, detections, 3D meshes, text descriptions).
- Evaluate models on context understanding, safety judgment, and action recommendation.
- Collaborate on model compression and deployment for edge devices (Rockchip, Jetson, Apple M-series).

Minimum Qualifications
- MS or PhD in Computer Science, AI/ML, Robotics, or a related field, or equivalent experience.
- 3+ years of experience building deep learning models, including transformers.
- Hands-on experience with multimodal models (VLMs) or LLM fine-tuning.
- Strong understanding of one or more of:
  - Vision Transformers (ViT, SigLIP)
  - CLIP-style contrastive models
  - LLaVA / BLIP / Qwen-VL / InternVL
  - DETR / SAM / YOLO / 3D perception networks
- Advanced Python and PyTorch skills.
- Experience training models with large datasets and distributed systems.
- Solid understanding of model architecture fundamentals (attention, tokenization, embeddings).

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹3,000,000.00 per year