4.0 years
40 - 50 Lacs
Madurai, Tamil Nadu, India
Remote
Experience: 4.00+ years | Salary: INR 4,000,000–5,000,000 / year (based on experience) | Expected Notice Period: 15 days | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-time permanent position (payroll and compliance managed by Crop.Photo)

(Note: This is a requirement for one of Uplers' clients, Crop.Photo.)

What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch

Crop.Photo is looking for:
Our engineers don't just write code. They frame product logic, shape UX behavior, and ship features. There are no PMs handing down tickets and no design handoffs. If you think like an owner and love combining deep ML logic with hard product edges, this role is for you. You'll work on systems that transform and generate millions of visual assets at scale for small-to-large enterprises.

What You'll Do
- Build and own AI-backed features end to end, from ideation to production, including layout logic, smart cropping, visual enhancement, outpainting, and GenAI workflows for background fills.
- Design scalable APIs that wrap vision models such as BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, and ControlNet into batch and real-time pipelines.
- Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch.
- Handle pixel-level transformations, from custom masks and color-space conversions to geometric warps and contour operations, with speed and precision.
- Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
- Frame problems when specs are vague: you'll help define what "good" looks like, and then build it.
- Collaborate with product, UX, and other engineers without relying on formal handoffs; you own your domain.

What You'll Need
- 2–3 years of hands-on experience with vision and image-generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN, including inpainting and outpainting workflows built on Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
- Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
- 1–2 years of experience with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and with composing multi-modal pipelines.
- A solid grasp of production model integration: model loading, GPU/CPU optimization, async inference, caching, and batch processing.
- Experience solving real-world visual problems such as object detection, segmentation, composition, or enhancement.
- The ability to debug and diagnose visual output errors, e.g., segmentation artifacts, off-center crops, or broken masks.
- A deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, and more.
- Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infrastructure (ECS, EC2/GPU, Lambda).
- A customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- A drive for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Who You Are
- You've built systems, not just prototypes.
- You care about both ML results and the system's behavior in production.
- You're comfortable taking a rough business goal and shaping the technical path to get there.
- You're energized by product-focused AI work: things that users feel and rely on.
- You've worked in, or want to work in, a startup-grade environment: messy, fast, and impactful.

How to apply for this opportunity
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal beyond this one; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
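The pixel-level transformation work described above (custom masks, color-space conversions, crops via array slicing) can be sketched with plain NumPy. This is an illustrative example only, not Crop.Photo's actual pipeline; the function names, the luminance threshold, and the toy image are all assumptions.

```python
# Illustrative sketch: a binary foreground mask from a luminance threshold,
# then a centered crop via array slicing. Not an actual Crop.Photo workflow.
import numpy as np

def luminance(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 RGB array to float luminance (BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def threshold_mask(rgb: np.ndarray, cutoff: float = 128.0) -> np.ndarray:
    """Binary mask: True wherever luminance exceeds the cutoff."""
    return luminance(rgb) > cutoff

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a size x size window from the image center via slicing."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = 255                      # bright 4x4 square on black background
mask = threshold_mask(img)
print(mask.sum())                        # 16 foreground pixels
print(center_crop(img, 4).shape)         # (4, 4, 3)
```

In production this thresholding step would typically be replaced by a segmentation model (e.g., SAM or BiRefNet, per the listing), with the same array-level mask and crop machinery downstream.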
Posted 5 days ago
5.0 years
4 Lacs
Ahmedabad
On-site
We are hiring a Senior Software Development Engineer for our platform. We help enterprises and service providers build AI inference platforms for their end users. As a Senior Software Engineer, you will take ownership of backend-heavy, full-stack feature development, building robust services, scalable APIs, and intuitive frontends that power the user experience. You'll contribute to the core of our enterprise-grade AI platform, collaborating across teams to ensure our systems are performant, secure, and built to last. This is a high-impact, high-visibility role at the intersection of AI infrastructure, enterprise software, and developer experience.

Responsibilities:
- Design, develop, and maintain databases, system APIs, system integrations, machine learning pipelines, and web user interfaces.
- Scale algorithms designed by data scientists for deployment in high-performance environments.
- Develop and maintain continuous integration pipelines to deploy the systems.
- Design and implement scalable backend systems using Go, C++, and Python.
- Model and manage data using relational databases (e.g., PostgreSQL, MySQL).
- Build frontend components and interfaces using TypeScript and JavaScript when needed.
- Participate in system architecture discussions and contribute to design decisions.
- Write clean, idiomatic, well-documented Go code following best practices and design patterns.
- Ensure high code quality through unit testing, automation, code reviews, and documentation.
- Communicate technical concepts clearly to both technical and non-technical stakeholders.

Qualifications and Criteria:
- 5–10 years of professional software engineering experience building enterprise-grade platforms.
- Deep proficiency in Go, with real-world experience building production-grade systems.
- Solid knowledge of software architecture, design patterns, and clean-code principles.
- Experience in high-level system design and building distributed systems.
- Expertise in Python and backend development, with experience in PostgreSQL or similar databases.
- Hands-on experience with unit testing, integration testing, and TDD in Go.
- Strong debugging, profiling, and performance-optimization skills.
- Excellent communication and collaboration skills.
- Hands-on experience with frontend development using JavaScript, TypeScript, and HTML/CSS.
- Bachelor's degree or equivalent experience in a quantitative field (Computer Science, Statistics, Applied Mathematics, Engineering, etc.).

Skills:
- Understanding of optimisation, predictive modelling, machine learning, clustering and classification techniques, and algorithms.
- Fluency in a programming language (e.g., C++, Go, Python, JavaScript, TypeScript, SQL).
- Docker, Kubernetes, and Linux knowledge are an advantage.
- Experience using Git and knowledge of continuous integration (e.g., GitLab/GitHub).
- Basic familiarity with relational databases, preferably PostgreSQL.
- Strong grounding in applied mathematics and a firm understanding of the engineering approach.
- Ability to interact with other team members via code and design documents.
- Ability to work on multiple tasks simultaneously and to meet deadlines in high-pressure environments.

Compensation: Commensurate with experience
Position Type: Full-time (in house)
Location: Ahmedabad / Jamnagar, Gujarat, India

Submission Requirements: CV and all academic transcripts. Submit to chintanit22@gmail.com or dipakberait@gmail.com with the name of the position you wish to apply for in the subject line.

Job Type: Full-time
Pay: From ₹40,000.00 per month
Benefits: Paid sick time
Location Type: In-person
Schedule: Day shift, Monday to Friday
Experience: Full-stack development: 5 years (preferred)
Work Location: In person
Speak with the employer: +91 9904075544
Posted 5 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
WEKA is architecting a new approach to the enterprise data stack built for the age of reasoning. NeuralMesh by WEKA sets the standard for agentic AI data infrastructure with a cloud- and AI-native software solution that can be deployed anywhere. It transforms legacy data silos into data pipelines that dramatically increase GPU utilization and make AI model training and inference, machine learning, and other compute-intensive workloads run faster, work more efficiently, and consume less energy.

WEKA is a pre-IPO, growth-stage company on a hyper-growth trajectory. We've raised $375M in capital from dozens of world-class venture capital and strategic investors. We help the world's largest and most innovative enterprises and research organizations, including 12 of the Fortune 50, achieve discoveries, insights, and business outcomes faster and more sustainably. We're passionate about solving our customers' most complex data challenges to accelerate intelligent innovation and business value. If you share our passion, we invite you to join us on this exciting journey.

What You'll Be Doing
As a Senior/Staff Kernel Engineer at WEKA, your primary responsibility will be collaborating with other team members on our high-performance filesystem solution and releasing our kernel driver, written in C on top of Linux, as part of the WEKA filesystem product. The kernel-based filesystem driver provides file access and logic for WEKA filesystems, along with the ability to connect clients to the WEKA cluster. This enables the WEKA system to give applications local-filesystem semantics and performance while providing centrally managed, shareable, and resilient storage. Our kernel team proudly delivers high-quality kernel drivers, and you will have the opportunity to quickly become an integral contributor.
As a Senior Kernel Engineer, you'll:
- Design and develop core product features in a complex software system, with a focus on the Linux kernel and OS infrastructure layers.
- Provide architectural guidance and fresh ideas for our core kernel driver and related interfaces.
- Locate performance bottlenecks within Linux and/or its drivers or other components, and suggest and implement enhancements to meet target performance goals.
- Most importantly, assume nothing: constantly revisit how we work and whether our productivity is perfectly tuned.

Requirements
- 10+ years of hands-on experience in Linux kernel development and debugging.
- Mastery of low-level C development in the Linux kernel, with extensive experience in performance-sensitive code and a solid understanding of the VFS, page cache, and filesystem concepts.
- Familiarity with kernel development methodologies and kernel structure, as well as experience developing kernel modules.
- Top-notch experience with the Linux kernel driver model and its development.
- Experience with lock-based and lockless synchronization between kernel space and userspace.
- Broad knowledge and understanding of Linux internals and kernel subsystems (memory management, IO, storage, networking), plus kernel crash and core analysis skills.
- Knowledge of IO tools and performance benchmarking using standard tools.
- A deep understanding of threading and locking mechanisms.
- A highly motivated and independent engineer with a positive attitude, a creative and open mind, and fluency in English.

It's Nice If You Have
- A background in working with the Linux kernel community.
- Experience contributing, upstreaming, or maintaining kernel code.
- Knowledge of storage subsystems, the storage stack, and protocols (NVMe, NFS, Samba, filesystems), along with development experience with enterprise-grade storage solutions.
- Experience hacking on complex open-source projects.

The WEKA Way
We are Accountable: We take full ownership, always, even when things don't go as planned.
We lead with integrity, show up with responsibility and ownership, and hold ourselves and each other to the highest standards.
We are Brave: We question the status quo, push boundaries, and take smart risks when needed. We welcome challenges and embrace debates as opportunities for growth, turning courage into fuel for innovation.
We are Collaborative: True collaboration isn't only about working together; it's about lifting one another up to succeed collectively. We are team-oriented and communicate with empathy and respect. We challenge each other, practice positive conflict resolution, and are transparent about our goals and results. Together, we're unstoppable.
We are Customer Centric: Our customers are at the heart of everything we do. We actively listen to and prioritize the success of our customers, and every decision we make is driven by how we can better serve, support, and empower them to succeed. When our customers win, we win.

Concerned that you don't meet every qualification above? Studies have shown that women and people of color may be less likely to apply for jobs if they don't meet every qualification specified. At WEKA, we are committed to building a diverse, inclusive, and authentic workplace. If you are excited about this position but worry that your past experience doesn't match the job description perfectly, we encourage you to apply anyway; you may be just the right candidate for this or other roles at WEKA.

WEKA is an equal opportunity employer that prohibits discrimination and harassment of any kind. We provide equal opportunities to all employees and applicants for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Posted 5 days ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Scientist / ML Scientist

About Auxia
Auxia is an AI-powered growth and personalization platform that is reinventing how companies activate, engage, retain, and monetize their customers. Auxia's software delivers real-time personalization using ML that treats each customer as a unique individual, effectively creating a "cohort of one." With Auxia, hundreds of personalized campaigns can be generated and deployed in minutes, doing away with static, tedious, rules-based segmentation and targeting. Auxia is easy to use, integrates with most major data and customer-engagement tools, and frees up valuable internal resources. Our team started Auxia based on our collective expertise in advancing product growth at companies like Google, Meta, and Lyft. We saw a paradigm shift unfolding in which the most successful teams drove significant revenue improvements and cost reductions by moving from static, rules-based segmentation and targeting to real-time decision-making driven by machine learning.

About the role
As a core member of the Auxia data science team, you will play a significant role in driving the data science vision and roadmap, and in building and improving our machine learning models and agentic AI products. You will perform research at the intersection of recommender systems, causal inference, transformers, foundation models (LLMs), content understanding, and reinforcement learning, and develop production-grade models serving decisions to tens of millions of users per day. You will work at the intersection of artificial intelligence, process automation, and workflow optimization to create intelligent agents that can understand objectives, make decisions, and adapt to changing requirements in real time. You will also perform in-depth analysis of customer data to derive insights into customer problems, collaborating cross-functionally with engineering, product, and business teams.
Responsibilities
- Design, implement, and research machine learning algorithms to solve growth and personalization problems, and continuously fine-tune and improve them.
- Design and develop AI-driven autonomous agents to execute complex workflows across various applications and objectives.
- Implement and improve offline model evaluation methods.
- Analyze large business datasets to extract knowledge and communicate insights on critical business questions, leveraging traditional statistical or machine learning methodologies.
- Present and communicate data-based insights and recommendations to product, growth, and data science teams at Auxia's customers.
- Stay current with ML research through literature reviews, conferences, and networking.

Qualifications
- Master's or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 7+ years of experience training, improving, and deploying ML models and developing production software systems such as data pipelines or dashboards, preferably at fast-growing technology companies.
- Proficiency in ML frameworks (TensorFlow/PyTorch/JAX) and the Python data science ecosystem (NumPy, SciPy, Pandas, etc.).
- Experience with LLMs, RAG, information retrieval, etc., in building production-grade solutions.
- Experience in recommender systems, feed ranking, and optimization.
- Experience with causal inference techniques and experimental design.
- Experience working with cloud services to consume and manage large-scale datasets.
- Thrives in ambiguity, has an ownership mindset, and is willing to get their hands dirty outside the scope of the role (e.g., product vision, strategic roadmap, engineering, etc.).
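The "offline model evaluation" responsibility above can be illustrated with one common metric, ROC AUC, computed from model scores and binary outcome labels using the rank-based (Mann-Whitney) formulation. The function name and toy data are illustrative assumptions, not Auxia's actual evaluation stack.

```python
# Illustrative sketch of an offline evaluation metric: ROC AUC as the
# probability that a randomly chosen positive outranks a randomly chosen
# negative (ties count as half). Pure stdlib; data is hypothetical.

def roc_auc(scores, labels):
    """AUC from parallel lists of model scores and binary labels (1/0)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    # Count pairwise wins; a tie between a positive and a negative counts 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One positive outscores both negatives; the other beats only one of them.
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```

This pairwise formulation is O(P*N) and fine for spot checks; production evaluation pipelines would typically use a sort-based O(n log n) implementation (e.g., scikit-learn's `roc_auc_score`) over logged decisions.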
Posted 5 days ago
4.0 years
40 - 50 Lacs
Faridabad, Haryana, India
Remote
Posted 5 days ago
4.0 years
40 - 50 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Posted 5 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Software Architect
Location: Sector 63, Gurgaon (On‑site)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 8–12 years in scalable software design, with significant ownership of cloud‑native, data‑intensive, or AI‑driven products
Apply: careers@darwix.ai
Subject Line: Application – Software Architect – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform transforming how enterprise sales, service, and field teams operate across India, MENA, and Southeast Asia. Our products — Transform+, Sherpa.ai, and Store Intel — deliver multilingual speech‑to‑text pipelines, live agent coaching, behavioural scoring, and computer‑vision insights for brands such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. Backed by leading VCs and built by alumni from IIT, IIM, and BITS, we are scaling rapidly and require a robust architectural foundation for continued global growth.

Role Overview
The Software Architect will own the end‑to‑end technical architecture of Darwix AI’s platform — spanning real‑time voice processing, generative‑AI services, analytics dashboards, and enterprise integrations. You will define reference architectures, lead critical design reviews, and partner with engineering, AI research, DevOps, and product teams to ensure the platform meets stringent requirements on scalability, latency, security, and maintainability.

Key Responsibilities
Architectural Ownership
Define and evolve the platform architecture covering microservices, API gateways, data pipelines, real‑time streaming, and storage strategies.
Establish design patterns for integrating LLMs, speech‑to‑text engines, vector databases, and retrieval‑augmented generation pipelines.
Create and maintain architecture artefacts — logical and physical diagrams, interface contracts, data flow maps, threat models.

Scalability & Reliability
Specify non‑functional requirements (latency, throughput, availability, observability) and drive their implementation.
Guide decisions on sharding, caching, queueing, and auto‑scaling to handle spikes in concurrent calls and AI inference workloads.
Collaborate with DevOps on HA/DR strategies, cost‑optimised cloud deployments, and CI/CD best practices.

Technical Leadership
Lead design reviews, code reviews, and proof‑of‑concepts for complex modules (speech pipelines, RAG services, dashboard analytics).
Mentor senior and mid‑level engineers on clean architecture, domain‑driven design, and testability.
Evaluate new tools, frameworks, and open‑source components; build decision matrices for adoption.

Security & Compliance
Set architectural guardrails for authentication, authorisation, encryption, and data residency.
Support infosec questionnaires and client audits by providing architecture and data‑flow evidence.
Ensure alignment with industry standards (SOC 2, and GDPR where applicable) in design and implementation.

Cross‑Functional Collaboration
Work with Product and AI leadership to translate business requirements into well‑scoped, feasible technical solutions.
Engage with customer‑facing solution architects to map client environments to Darwix AI components.
Drive architectural alignment across multiple engineering pods to avoid duplication and technical debt.

Required Skills & Qualifications
8–12 years in backend or full‑stack engineering, with at least 3 years in an architecture or principal engineer role.
Deep expertise in Python/Node.js, microservices, REST/gRPC APIs, and event‑driven architectures (Kafka/Redis Streams).
Strong knowledge of cloud platforms (AWS or GCP), container orchestration (Docker/Kubernetes), and IaC tools.
Experience designing data platforms with PostgreSQL, MongoDB, Redis, S3, and vector databases (FAISS/Pinecone).
Proven ability to optimise for high‑concurrency, low‑latency audio or data‑streaming workloads.
Demonstrated track record of guiding teams through major refactors, migrations, or greenfield platform builds.

Preferred Qualifications
Familiarity with speech processing stacks (Whisper, Deepgram), LLM orchestration (LangChain), and GPU inference serving.
Exposure to enterprise integrations with CRMs (Salesforce, Zoho), telephony (Twilio, Exotel), and messaging APIs (WhatsApp).
Prior experience in a high‑growth SaaS or AI startup serving international enterprise clients.
Bachelor’s or Master’s degree in Computer Science or a related discipline from a Tier 1 institution.

Success Metrics (First 12 Months)
Architectural blueprints ratified and adopted across all engineering squads.
Target latency and uptime SLAs (≥ 99.99%) achieved for real‑time AI services.
Reduction of production incidents attributable to architectural debt or design gaps.
Completion of at least one major scalability initiative (e.g., regional multitenancy, GPU inference pool, streaming upgrade).
Positive feedback from engineering teams on the clarity and usability of architectural guidelines.

Who You Are
A systems thinker who balances immediate product deadlines with long‑term platform health.
A pragmatic technologist: you know when to refactor, when to extend, and when to build net‑new.
A persuasive communicator comfortable explaining complex designs to engineers, product managers, and clients.
A mentor and collaborator who raises the technical bar through example and feedback.
Motivated by building resilient architectures that power real‑world AI products at scale.

How to Apply
Send your résumé and (optionally) architectural portfolio links to careers@darwix.ai
Subject: Application – Software Architect – [Your Name]

Join Darwix AI to architect the next generation of real‑time, multilingual conversational intelligence platforms and leave a lasting impact on how global enterprises drive revenue with AI.
Posted 6 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior AI Research Scientist
Location: Sector 63, Gurgaon – On‑site
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years in applied AI/ML research, with multiple publications or patents and demonstrable product impact
Apply: careers@darwix.ai
Subject Line: Application – Senior AI Research Scientist – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform that powers real‑time conversation intelligence, multilingual coaching, and behavioural analytics for large revenue and service teams. Our products — Transform+, Sherpa.ai, and Store Intel — integrate speech‑to‑text, LLM‑driven analysis, real‑time nudging, and computer vision to improve performance across BFSI, real estate, retail, and healthcare enterprises such as IndiaMart, Wakefit, Bank Dofar, GIVA, and Sobha.

Role Overview
The Senior AI Research Scientist will own the end‑to‑end research agenda that advances Darwix AI’s core capabilities in speech, natural‑language understanding, and generative AI. You will design novel algorithms, convert them into deployable prototypes, and collaborate with engineering to ship production‑grade features that directly influence enterprise revenue outcomes.

Key Responsibilities
Research Leadership
Formulate and drive a 12‑ to 24‑month research roadmap covering multilingual speech recognition, conversation summarisation, LLM prompt optimisation, retrieval‑augmented generation (RAG), and behavioural scoring.
Publish internal white papers and, where strategic, peer‑reviewed papers or patents to establish technological leadership.

Model Development & Prototyping
Design and train advanced models (e.g., Whisper fine‑tunes, Conformer‑RNN hybrids, transformer‑based diarisation, LLM fine‑tuning with LoRA/QLoRA).
Build rapid prototypes in PyTorch or TensorFlow; benchmark against latency, accuracy, and compute‑cost targets relevant to real‑time use cases.

Production Transfer
Work closely with backend and MLOps teams to convert research code into containerised, scalable inference micro‑services.
Define evaluation harnesses (WER, BLEU, ROUGE, accuracy, latency) and automate regression tests before every release.

Data Strategy
Lead data‑curation efforts: multilingual audio corpora, domain‑specific fine‑tuning datasets, and synthetic data pipelines for low‑resource languages.
Establish annotation guidelines, active‑learning loops, and data quality metrics.

Cross‑Functional Collaboration
Act as the principal technical advisor in customer POCs involving custom language models, domain‑specific ontologies, or privacy‑sensitive deployments.
Mentor junior researchers and collaborate with product managers on feasibility assessments and success metrics for AI‑driven features.

Required Qualifications
6–10 years of hands‑on research in ASR, NLP, or multimodal AI, including at least three years in a senior or lead capacity.
Strong publication record (top conferences such as ACL, INTERSPEECH, NeurIPS, ICLR, EMNLP) or patents showing applied innovation.
Expert‑level Python and deep‑learning fluency (PyTorch or TensorFlow); comfort with Hugging Face, OpenAI APIs, and distributed training.
Proven experience delivering research outputs into production systems with measurable business impact.
Solid grasp of advanced topics: sequence‑to‑sequence modelling, attention mechanisms, LLM alignment, speaker diarisation, vector search, on‑device optimisation.

Preferred Qualifications
Experience with Indic or Arabic speech/NLP, code‑switching, or low‑resource language modelling.
Familiarity with GPU orchestration, Triton inference servers, TorchServe, or ONNX runtime optimisation.
Prior work on enterprise call‑centre datasets, sales enablement analytics, or real‑time speech pipelines.
Doctorate (PhD) in Computer Science, Electrical Engineering, or a closely related field from a Tier 1 institution.
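As a concrete reference for one of the evaluation metrics this posting names, word error rate (WER) is the word-level edit distance between a reference and a hypothesis transcript, divided by the reference word count. A minimal sketch (illustrative only, not Darwix AI code; production harnesses typically use a library such as jiwer):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edits turning the first i ref words into the first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i                       # deletions
    for j in range(len(h) + 1):
        dp[0][j] = j                       # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[-1][-1] / max(len(r), 1)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution across three words
```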
Success Metrics
Reduction of transcription error rate and/or inference latency by agreed percentage targets within 12 months.
Successful deployment of at least two novel AI modules into production, with adoption across Tier‑1 client accounts.
Internal citation and reuse of developed components in other product lines.
Peer‑recognised technical leadership through mentoring, documentation, and knowledge sharing.

Application Process
Send your résumé (and publication list, if separate) to careers@darwix.ai with the subject line indicated above. Optionally, include a one‑page summary of a research project you transitioned from lab to production, detailing the problem, approach, and measured impact.

Joining Darwix AI as a Senior AI Research Scientist means shaping the next generation of real‑time, multilingual conversational intelligence for enterprise revenue teams worldwide. If you are passionate about applied research that moves the business needle, we look forward to hearing from you.
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications
Location: Sector 63, Gurugram, Haryana – 100% In-Office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM – 8:00 PM
Experience: 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments
Function: AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment
Apply: careers@darwix.ai
Subject Line: “Application – Applied ML Scientist – [Your Name]”

About Darwix AI
Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate:
Real-time nudges for agents and reps
Conversational analytics and scoring to drive performance
CCTV-based behavior insights to boost in-store conversion
We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and others. We’re backed by top-tier operators and venture investors and are scaling rapidly across multiple verticals and geographies.

Role Overview
We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers. This is a core role in our AI/ML team in which you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments.

Key Responsibilities
Voice-to-Text (ASR) Engineering
Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages
Integrate diarization and punctuation recovery pipelines
Benchmark and improve transcription accuracy across noisy call environments
Optimize ASR latency for real-time and batch processing modes

NLP & Conversational Intelligence
Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring
Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.)
Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance
Contribute to real-time inference APIs for NLP outputs in live dashboards

GenAI & LLM Systems
Design and test GenAI prompts for summarization, coaching, and feedback generation
Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs
Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics
Implement prompt tuning, caching, and fallback strategies to ensure system reliability

Experimentation & Deployment
Own the model lifecycle: data preparation, training, evaluation, deployment, monitoring
Build reproducible training pipelines using MLflow, DVC, or similar tools
Write efficient, well-structured, production-ready code for inference APIs
Document experiments and share insights with cross-functional teams

Required Qualifications
Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields
3–7 years of experience applying ML in production, including NLP and/or speech
Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper)
Strong Python skills with experience in PyTorch or TensorFlow
Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker)
Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms)
Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility
Ability to work collaboratively in a fast-paced startup environment

Preferred Skills
Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.)
Knowledge of diarization and speaker separation algorithms
Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines
Familiarity with inference optimization techniques (quantization, ONNX, TorchScript)
Contributions to open-source ASR or NLP projects
Working knowledge of AWS/GCP/Azure cloud platforms

What Success Looks Like
Transcription accuracy improvement to ≥ 85% across core languages
NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls
3–5 LLM-driven product features delivered in the first year
Inference latency reduced by 30–50% through model and infra optimization
AI features embedded across all Tier 1 customer accounts within 12 months

Life at Darwix AI
You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks.
Compensation & Perks
Competitive fixed salary based on experience
Quarterly/annual performance-linked bonuses
ESOP eligibility after 12 months
Compute credits and a model experimentation environment
Health insurance and a mental wellness stipend
Premium tools and GPU access for model development
Learning wallet for certifications, courses, and AI research access

Career Path
Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules
Year 2: Transition into a Senior Applied Scientist or Tech Lead role for conversation intelligence
Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines

How to Apply
Email the following to careers@darwix.ai:
Updated resume (PDF)
A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?”
Optional: GitHub or portfolio links demonstrating your work
Subject Line: “Application – Applied Machine Learning Scientist – [Your Name]”
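As background for the feature-extraction skills this posting lists (spectrograms), here is a minimal NumPy sketch of framing a signal and computing a log-magnitude spectrogram. It is illustrative only — real ASR pipelines would typically use librosa or torchaudio, and all names here are hypothetical:

```python
import numpy as np

def frame_signal(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Slice a 1-D signal into overlapping frames (no padding)."""
    n = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    return signal[idx]

def log_spectrogram(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Log-magnitude spectrogram via a windowed real FFT per frame."""
    frames = frame_signal(signal, frame_len, hop) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame_len // 2 + 1)
    return np.log1p(mag)

t = np.linspace(0, 1, 4000, endpoint=False)   # 1 s at a 4 kHz sample rate
sig = np.sin(2 * np.pi * 440 * t)             # a 440 Hz tone
spec = log_spectrogram(sig)
print(spec.shape)  # (30, 129)
```

MFCCs build on this by mapping the magnitude spectrum onto a mel filterbank and taking a DCT, but the framing/windowing step is the same.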
Posted 6 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Founding AI Engineer
Location: Sector 63, Gurgaon – On‑site
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 4–8 years of hands‑on AI/ML engineering in production environments
Apply: careers@darwix.ai
Subject Line: Application – Founding AI Engineer – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform transforming how enterprise revenue and service teams operate. Our products — Transform+, Sherpa.ai, and Store Intel — deliver multilingual speech‑to‑text, live coaching nudges, behavioural scoring, and computer‑vision insights for clients such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. Backed by leading investors and built by IIT/IIM/BITS alumni, we are expanding rapidly across India, MENA, and Southeast Asia.

Role Overview
As the Founding AI Engineer, you will own the design, development, and deployment of Darwix AI’s core machine‑learning and generative‑AI systems from the ground up. You will work directly with the CTO and founders to convert ambitious product ideas into scalable, low‑latency services powering thousands of live customer interactions daily. This is a zero‑to‑one, high‑ownership role that shapes the technical backbone — and the culture — of our AI organisation.

Key Responsibilities
End‑to‑End Model Build & Deployment
Architect, train, and fine‑tune multilingual speech‑to‑text, diarisation, NER, summarisation, and scoring models (Whisper, wav2vec 2.0, transformer‑based NLP).
Design RAG pipelines and prompt‑engineering frameworks with commercial and open‑source LLMs (OpenAI, Mistral, Llama 2).
Build GPU/CPU‑optimised inference micro‑services in Python/FastAPI with strict latency budgets.

Production Engineering
Implement asynchronous processing, message queues, caching, and load balancing for high‑concurrency voice and text streams.
Establish CI/CD, model versioning, A/B testing, and automated rollback for ML APIs.

Data Strategy & Tooling
Define data‑collection, labelling, and active‑learning loops; build pipelines for continuous model improvement.
Create evaluation harnesses (WER, ROUGE, AUROC, latency) and automate nightly regression tests.

Security & Compliance
Implement role‑based access, encryption at rest and in transit, and audit logging for all AI endpoints.
Ensure adherence to enterprise infosec requirements and regional data‑privacy standards.

Cross‑Functional Collaboration
Partner with product managers to translate customer pain points into technical requirements and success metrics.
Work with backend, DevOps, and frontend teams to expose AI outputs via dashboards, APIs, and real‑time agent‑assist overlays.

Technical Leadership
Establish coding standards, documentation templates, and a peer‑review culture for the AI team.
Mentor junior engineers as the team scales; influence hiring and tech‑stack decisions.

Required Skills & Qualifications
4–8 years building and deploying ML systems in production (audio, NLP, or LLM focus).
Expert‑level Python; strong grasp of PyTorch (or TensorFlow), Hugging Face Transformers, and data‑processing libraries.
Proven record of optimising inference pipelines for sub‑second latency at scale.
Hands‑on experience with cloud infrastructure (AWS or GCP), Docker/Kubernetes, and CI/CD for ML.
Deep understanding of REST/gRPC APIs, security best practices, and high‑availability architectures.
Ability to articulate trade‑offs and align technical decisions with business outcomes.

Preferred Experience
Prior work on Indic or Arabic speech/NLP, code‑switching, or low‑resource language modelling.
Familiarity with vector databases (Pinecone, FAISS), Redis Streams/Kafka, and GPU orchestration (Triton, TorchServe).
Exposure to sales‑tech, call‑centre analytics, or real‑time coaching platforms.
Contributions to open‑source AI projects or relevant peer‑reviewed publications.

Success Metrics (First 12 Months)
≥ 25% reduction in transcription error rate or latency across core languages.
Two net‑new AI modules shipped to production and adopted by Tier‑1 clients.
Robust CI/CD and monitoring pipelines in place with < 1% model downtime.
Documentation and onboarding playbooks enabling the AI team headcount to double without quality loss.

Who You Are
A builder who takes ideas from whiteboard to production with minimal supervision.
A systems thinker who balances algorithmic innovation with engineering pragmatism.
A hands‑on leader who codes, mentors, and sets the technical bar through example.
A product‑centric technologist who obsesses over user impact, not benchmark vanity.
A lifelong learner who follows the bleeding edge of GenAI and applies it wisely.

How to Apply
Email your résumé to careers@darwix.ai with the subject line specified above. Optionally, include a brief note detailing an AI system you have designed and deployed, the challenges faced, and the measurable impact achieved.

Joining Darwix AI as the Founding AI Engineer means taking ownership of the platform that will redefine how revenue teams worldwide leverage real‑time intelligence. If you are ready to build, scale, and lead at the frontier of GenAI, we look forward to hearing from you.
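To illustrate the asynchronous batch-inference pattern this posting describes, here is a toy asyncio micro-batcher that coalesces concurrent requests into one batched call. All names are hypothetical and this is a sketch of the general technique, not Darwix AI's implementation:

```python
import asyncio

class MicroBatcher:
    """Coalesce concurrent requests into one batched model call."""

    def __init__(self, batch_fn, max_batch=8, max_wait=0.01):
        self.batch_fn = batch_fn      # runs a whole batch at once
        self.max_batch = max_batch    # cap on batch size
        self.max_wait = max_wait      # seconds to wait for more requests
        self.queue = []
        self.lock = asyncio.Lock()

    async def infer(self, item):
        fut = asyncio.get_running_loop().create_future()
        async with self.lock:
            self.queue.append((item, fut))
            if len(self.queue) == 1:          # first request schedules a flush
                asyncio.create_task(self._flush())
        return await fut

    async def _flush(self):
        await asyncio.sleep(self.max_wait)    # give peer requests time to arrive
        async with self.lock:
            batch, self.queue = self.queue[:self.max_batch], self.queue[self.max_batch:]
            if self.queue:                    # leftovers get their own flush
                asyncio.create_task(self._flush())
        results = self.batch_fn([item for item, _ in batch])
        for (_, fut), res in zip(batch, results):
            fut.set_result(res)

async def main():
    # stand-in for a real GPU model: doubles each input
    batcher = MicroBatcher(batch_fn=lambda xs: [x * 2 for x in xs])
    return await asyncio.gather(*(batcher.infer(i) for i in range(5)))

result = asyncio.run(main())
print(result)  # [0, 2, 4, 6, 8]
```

The trade-off `max_wait` encodes — a small added latency in exchange for larger, GPU-friendlier batches — is the core of most production inference batching schemes.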
Posted 6 days ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Head of AI & ML Platforms Focus : Voice AI, NLP, Conversation Intelligence for Omnichannel Enterprise Sales Location : Sector 63, Gurugram, Haryana — Full-time, 100% In-Office Work Hours : 10:30 AM – 8:00 PM, Monday to Friday (2nd and 4th Saturdays off) Experience Required : 8–15 years in AI/ML, with 3+ years leading teams in voice, NLP, or conversation platforms Apply : careers@darwix.ai Subject Line : “Application – Head of AI & ML Platforms – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers contextual nudges, conversation scoring, and performance analytics in real time. Our suite of products includes: Transform+ : Real-time conversational intelligence for contact centers and field sales Sherpa.ai : A multilingual GenAI assistant that provides in-the-moment coaching, summaries, and objection handling support Store Intel : A computer vision solution that transforms CCTV feeds into actionable insights for physical retail spaces Darwix AI is trusted by large enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty , and is backed by leading institutional and operator investors. We are expanding rapidly across India, the Middle East, and Southeast Asia. Role Overview We are seeking a highly experienced and technically strong Head of AI & ML Platforms to architect and lead the end-to-end AI systems powering our voice intelligence, NLP, and GenAI solutions. This is a leadership role that blends research depth with applied engineering execution. The ideal candidate will have deep experience in building and deploying voice-to-text pipelines, multilingual NLP systems, and production-grade inference workflows. 
The individual will be responsible for model design, accuracy benchmarking, latency optimization, infrastructure orchestration, and integration across our product suite. This is a critical leadership role with direct influence over product velocity, enterprise client outcomes, and future platform scalability.
Key Responsibilities
Voice-to-Text (ASR) Architecture
Lead the design and optimization of large-scale automatic speech recognition (ASR) pipelines using open-source and commercial frameworks (e.g., WhisperX, Deepgram, AWS Transcribe)
Enhance speaker diarization, custom vocabulary accuracy, and latency performance for real-time streaming scenarios
Build fallback ASR workflows for offline and batch-mode processing
Implement multilingual and domain-specific tuning, especially for Indian and GCC languages
Natural Language Processing and Conversation Analysis
Build NLP models for conversation segmentation, intent detection, tone/sentiment analysis, and call scoring
Implement multilingual support (Hindi, Arabic, Tamil, etc.) with fallback strategies for mixed-language and dialectal inputs
Develop robust algorithms for real-time classification of sales behaviors (e.g., probing, pitching, objection handling)
Train and fine-tune transformer-based models (e.g., BERT, RoBERTa, DeBERTa) and sentence embedding models for text analytics
GenAI and LLM Integration
Design modular GenAI pipelines for nudging, summarization, and response generation using tools like LangChain, LlamaIndex, and OpenAI APIs
Implement retrieval-augmented generation (RAG) architectures for contextual, accurate, and hallucination-resistant outputs
Build prompt orchestration frameworks that support real-time sales coaching across channels
Ensure safety, reliability, and performance of LLM-driven outputs across use cases
Infrastructure and Deployment
Lead the development of scalable, secure, and low-latency AI services deployed via FastAPI, TorchServe, or similar frameworks
Oversee model versioning, monitoring, and retraining workflows using MLflow, DVC, or other MLOps tools
Build hybrid inference systems for batch, real-time, and edge scenarios depending on product usage
Optimize inference pipelines for GPU/CPU balance, resource scheduling, and runtime efficiency
Team Leadership and Cross-functional Collaboration
Recruit, manage, and mentor a team of machine learning engineers and research scientists
Collaborate closely with Product, Engineering, and Customer Success to translate product requirements into AI features
Own AI roadmap planning, sprint delivery, and KPI measurement
Serve as the subject-matter expert for AI-related client discussions, sales demos, and enterprise implementation roadmaps
Required Qualifications
8+ years of experience in AI/ML with a minimum of 3 years in voice AI, NLP, or conversational platforms
Proven experience delivering production-grade ASR or NLP systems at scale
Deep familiarity with Python, PyTorch, HuggingFace, FastAPI, and containerized environments (Docker/Kubernetes)
Expertise in fine-tuning LLMs and building multi-language, multi-modal intelligence stacks
Demonstrated experience with tools such as WhisperX, Deepgram, Azure Speech, LangChain, MLflow, or Triton Inference Server
Experience deploying real-time or near real-time inference models at enterprise scale
Strong architectural thinking with the ability to design modular, reusable, and scalable ML services
Track record of building and leading high-performing ML teams
Preferred Skills
Background in telecom, contact center AI, conversational analytics, or field sales optimization
Familiarity with GPU deployment, model quantization, and inference optimization
Experience with low-resource languages and multilingual data augmentation
Understanding of sales enablement workflows and domain-specific ontology development
Experience integrating AI models into customer-facing SaaS dashboards and APIs
Success Metrics
Transcription accuracy improvement of ≥15% across core languages within 6 months
End-to-end voice-to-nudge latency reduced below 5 seconds
GenAI assistant adoption across 70%+ of eligible conversations
AI-driven call scoring rolled out across 100% of Tier-1 clients within 9 months
Model deployment time (dev to prod) reduced by ≥40% through tooling and process improvements
Culture at Darwix AI
At Darwix AI, we operate at the intersection of engineering velocity and product clarity. We move fast, prioritize outcomes over optics, and expect leaders to drive hands-on impact. You will work directly with the founding team and senior leaders across engineering, product, and GTM functions. Expect ownership, direct communication, and a culture that values builders who scale systems, people, and strategy.
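One core piece of the RAG architectures this role covers is the retrieval step: rank indexed passages by similarity to a query embedding, then pass the top hits to the LLM as context. A minimal sketch with toy NumPy vectors (the function name, passages, and 3-dimensional embeddings are illustrative assumptions, not Darwix internals; a production system would use a real embedding model and a vector database such as FAISS):

```python
import numpy as np

def top_k_passages(query_vec, passage_vecs, passages, k=2):
    """Rank passages by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q                         # cosine similarity per passage
    order = np.argsort(scores)[::-1][:k]   # indices of the k best matches
    return [(passages[i], float(scores[i])) for i in order]

# Toy 3-dimensional "embeddings" for illustration only
passages = ["pricing objection", "greeting script", "refund policy"]
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.1],
                 [0.8, 0.0, 0.6]])
query = np.array([1.0, 0.0, 0.1])

print(top_k_passages(query, vecs, passages))
```

The retrieved passages would then be interpolated into the LLM prompt, which is what makes the generation "retrieval-augmented".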
Compensation and Benefits
Competitive fixed compensation
Performance-based bonuses and growth-linked incentives
ESOP eligibility for leadership candidates
Access to GPU/compute credits and model experimentation infrastructure
Comprehensive medical insurance and wellness programs
Dedicated learning and development budget for technical and leadership upskilling
MacBook Pro, premium workstation, and access to industry tooling licenses
Career Progression
12-month roadmap: Build and stabilize the AI platform across all product lines
18–24-month horizon: Elevate to VP of AI or Chief AI Officer as platform scale increases globally
Future leadership role in enabling new verticals (e.g., healthcare, finance, logistics) with domain-specific GenAI solutions
How to Apply
Send the following to careers@darwix.ai:
Updated CV (PDF format)
A short statement (200 words max) on: "How would you design a multilingual voice-to-text pipeline optimized for low-resource Indic languages, with real-time nudge delivery?"
Links to any relevant GitHub repos, publications, or deployed projects (optional)
Subject Line: "Application – Head of AI & ML Platforms – [Your Name]"
Posted 6 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
We are looking for an experienced Solution Architect – AI/ML & Data Engineering to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.
Responsibilities:
Collaborate with clients to understand business requirements and design robust data solutions.
Lead the development of end-to-end data pipelines including ingestion, storage, processing, and visualization.
Architect scalable, secure, and compliant data systems following industry best practices.
Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
Act as a technical liaison between clients and internal teams throughout the project lifecycle.
Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
Foster long-term client relationships and identify opportunities for business expansion.
Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
Provide hands-on guidance for containerization and deployment using Kubernetes.
Ensure proper implementation of data governance, modeling, and warehousing.
Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
10+ years of experience as a Data Solution Architect or similar role.
Deep technical expertise in data architecture, engineering, and AI/ML systems.
Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
Proven pre-sales experience: technical presentations, solutioning, and RFP support.
Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
Exposure to Generative AI frameworks and LLMs like OpenAI and Hugging Face.
Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE).
Familiarity with data governance, data modeling, and large-scale data warehousing.
Excellent problem-solving, communication, and client-facing skills.
Skills & Technology
Architecture & Engineering:
Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie.
ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue.
Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica.
Streaming: Apache Kafka, Azure Event Hubs, AWS.
Cloud Platforms: Azure (preferred), AWS, GCP.
Data Lakes: ADLS, AWS S3, Google Cloud.
Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE.
AI/ML & GenAI
Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray.
Inference: TensorFlow Serving, KServe, Seldon.
Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.).
DevOps & Deployment
Kubernetes: AKS, EKS, GKE, open-source K8s, Helm.
CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps.
(ref:hirist.tech)
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Instead
Instead is a tax platform designed to help taxpayers and tax professionals collaborate to save money on taxes. As the first company in decades to receive IRS approvals to e-file Forms 1040, 1041, 1120, 1120S, and 1065, we're reinventing a complex category. Founded in 2023, Instead combines LLMs with tax law to make tax management a continuous, proactive process rather than a dreaded annual deadline.
Instead's investors include Sarah Guo from Conviction (conviction.com), IRIS (irisglobal.com), the largest tax software provider in the UK, and many of our partners and customers who believe in our mission and vision.
The Instead team comprises talented leaders from leading tax, financial services, and fintech companies — Gusto, Intuit, Zenefits, Thomson Reuters, Wolters Kluwer — as well as top tax and accounting firms such as PwC, BDO, RSM, and KPMG. Instead was a 2024 Innovation Award finalist in CPA Practice Advisor. Instead's CEO, Andrew Argue, is a CPA and has been named to the Top 100 Most Influential People in the Accounting Profession twice: Ones To Watch and CPA Practice Advisor's 20 Under 40.
About the role
In this role, you will get the chance to build innovative AI-powered tax software alongside a dynamic team. You'll work on both our customer-facing product Instead (instead.com) and the internal backend tooling that empowers our teams to build cutting-edge features. This role is at the forefront of leveraging AI to drive innovation across our platform, working on exciting high-value features for our customers. If you're passionate about working on real production use cases utilizing LLMs and want to contribute to groundbreaking AI applications in a fast-paced environment, you'll thrive as you help us drive innovation in tax technology.
Here is the tech you will get the chance to utilize:
Front End: Vue, Nuxt, TypeScript, Tailwind CSS
Back End: Go, Docker on AWS ECS/Fargate, PostgreSQL on AWS RDS
AI: Cursor with Sonnet 3.5, LangChain, Helicone, Gemini, OpenAI's API utilizing 4o (still exploring best use cases for o1 pro), and any other piece of AI tech that we can get our hands on
What you'll do
Ship full-stack product features end to end for our live customer platform
Make key product improvements based on customer feedback and usage analytics
Understand and solve critical product bugs across the full stack
Build and integrate components for infrastructure, supporting production-level inference and advanced prompt engineering
Create tools and internal platforms to enhance the productivity and capabilities of Instead's teams
Additional projects as needed by the internal engineering team and US-based product teams
What you'll need
Proficiency in full-stack development, with a strong understanding of web frameworks, backend systems, and cloud infrastructure
Experience building backend systems and infrastructure that can support live products
3+ years of software development experience
High attention to detail
Fast learner who enjoys working in a fast-paced, innovative environment
Nice to have
A track record of working on full-stack projects, end to end
Experience with AI/ML frameworks and prompt engineering
Experience programming using AI copilots such as Cursor, GitHub Copilot, ChatGPT, Claude, Windsurf, etc.
Experience with the technologies in our current stack
Experience building customer-facing products at scale
Why join us
Ability to work with cutting-edge AI in all stages of the software development lifecycle
Work on a cutting-edge tax tech platform that's transforming the industry
Be part of a collaborative, mission-driven team
Competitive compensation and benefits
Growth opportunities in product development, compliance, and technology
Opportunity to work with cutting-edge AI technology in production environments
Equal Opportunity Employer – M/F/D/V
As a global business, we rely on diversity of culture and thought to deliver on our goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.
We trust our team with sensitive information, so all candidates who receive and accept employment offers must complete a background check before joining us.
Posted 6 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Principal Software Engineer – AI
Location: Gurgaon (In-Office)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years of hands-on development in AI/ML systems, with deep experience in shipping production-grade AI products
Apply at: careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]
About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS platform transforming how large sales and CX teams operate across India, MENA, and Southeast Asia. We build deeply integrated conversational intelligence and agent assist tools that enable:
Multilingual speech-to-text pipelines
Real-time agent coaching
AI-powered sales scoring
Predictive analytics and nudges
CRM and telephony integrations
Our clients include leading enterprises like IndiaMart, Bank Dofar, Wakefit, GIVA, and Sobha, and our product is deeply embedded in the daily workflows of field agents, telecallers, and enterprise sales teams. We are backed by top VCs and built by alumni from IIT, IIM, and BITS with deep expertise in real-time AI, enterprise SaaS, and automation.
Role Overview
We are hiring a Principal Software Engineer – AI to lead the development of advanced AI features in our conversational intelligence suite. This is a high-ownership role that combines software engineering, system design, and AI/ML application delivery. You will work across our GenAI stack — including Whisper, LangChain, LLMs, audio streaming, transcript processing, NLP pipelines, and scoring models — to build robust, scalable, and low-latency AI modules that power real-time user experiences. This is not a research role. You will be building, deploying, and optimizing production-grade AI features used daily by thousands of sales agents and managers across industries.
Key Responsibilities
1. AI System Architecture & Development
Design, build, and optimize core AI modules such as:
Multilingual speech-to-text (Whisper, Deepgram, Google STT)
Prompt-based LLM workflows (OpenAI, open-source LLMs)
Transcript post-processing: punctuation, speaker diarization, timestamping
Real-time trigger logic for call nudges and scoring
Build resilient pipelines using Python, FastAPI, Redis, Kafka, and vector databases
2. Production-Grade Deployment
Implement GPU/CPU-optimized inference services for latency-sensitive workflows
Use caching, batching, asynchronous processing, and message queues to scale real-time use cases
Monitor system health, fallback workflows, and logging for ML APIs in live environments
3. ML Workflow Engineering
Work with the Head of AI to fine-tune, benchmark, and deploy custom models for:
Call scoring (tone, compliance, product pitch)
Intent recognition and sentiment classification
Text summarization and cue generation
Build modular services to plug models into end-to-end workflows
4. Integrations with Product Modules
Collaborate with frontend, dashboard, and platform teams to serve AI output to users
Ensure transcript mapping, trigger visualization, and scoring feedback appear in real time in the UI
Build APIs and event triggers to interface AI systems with CRMs, telephony, WhatsApp, and analytics modules
5. Performance Tuning & Optimization
Profile latency and throughput of AI modules under production loads
Implement GPU-aware batching, model distillation, or quantization where required
Define and track key performance metrics (latency, accuracy, dropout rates)
6. Tech Leadership
Mentor junior engineers and review AI system architecture, code, and deployment pipelines
Set engineering standards and documentation practices for AI workflows
Contribute to planning, retrospectives, and roadmap prioritization
What We're Looking For
Technical Skills
6–10 years of backend or AI-focused engineering experience in fast-paced product environments
Strong Python fundamentals with experience in FastAPI, Flask, or similar frameworks
Proficiency in PyTorch, Transformers, and the OpenAI API/LangChain
Deep understanding of speech/text pipelines, NLP, and real-time inference
Experience deploying LLMs and AI models in production at scale
Comfort with PostgreSQL, MongoDB, Redis, Kafka, S3, and Docker/Kubernetes
System Design Experience
Ability to design and deploy distributed AI microservices
Proven track record of latency optimization, throughput scaling, and high-availability setups
Familiarity with GPU orchestration, containerization, CI/CD (GitHub Actions/Jenkins), and monitoring tools
Bonus Skills
Experience working with multilingual STT models and Indic languages
Knowledge of Hugging Face, Weaviate, Pinecone, or vector search infrastructure
Prior work on conversational AI, recommendation engines, or real-time coaching systems
Exposure to sales/CX intelligence platforms or enterprise B2B SaaS
Who You Are
A pragmatic builder — you don't chase perfection but deliver what scales
A systems thinker — you see across data flows, bottlenecks, and trade-offs
A hands-on leader — you mentor while still writing meaningful code
A performance optimizer — you love shaving off latency and memory bottlenecks
A product-focused technologist — you think about UX, edge cases, and real-world impact
What You'll Impact
Every nudge shown to a sales agent during a live customer call
Every transcript that powers a manager's coaching decision
Every scorecard that enables better hiring and training at scale
Every dashboard that shows what drives revenue growth for CXOs
This role puts you at the intersection of AI, revenue, and impact — what you build is used daily by teams closing millions in sales across India and the Middle East.
How to Apply
Send your resume to careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]
(Optional): Include a brief note describing one AI system you've built for production — what problem it solved, what stack it used, and what challenges you overcame.
If you're ready to lead the AI backbone of enterprise sales, build world-class systems, and drive real-time intelligence at scale — Darwix AI is where you belong.
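The batching and asynchronous-processing responsibilities described above are often implemented as request micro-batching: concurrent inference calls are held for a short window and then served with a single batched model invocation. A hedged sketch in pure Python asyncio (the class name, 10 ms window, and the stand-in model function are illustrative assumptions, not a specific Darwix component):

```python
import asyncio

class MicroBatcher:
    """Coalesce concurrent requests into one batched model call."""

    def __init__(self, model_fn, window_ms=10):
        self.model_fn = model_fn          # callable: list of inputs -> list of outputs
        self.window = window_ms / 1000    # collection window in seconds
        self.pending = []                 # (input, future) pairs awaiting a batch
        self.flusher = None               # the in-flight flush task, if any

    async def infer(self, x):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self.pending.append((x, fut))
        if self.flusher is None:          # first request starts the flush timer
            self.flusher = asyncio.create_task(self._flush())
        return await fut

    async def _flush(self):
        await asyncio.sleep(self.window)  # let concurrent requests accumulate
        batch, self.pending, self.flusher = self.pending, [], None
        outputs = self.model_fn([x for x, _ in batch])  # one batched call
        for (_, fut), out in zip(batch, outputs):
            fut.set_result(out)           # resolve each caller individually

async def main():
    # Stand-in "model" that doubles each input, batched
    batcher = MicroBatcher(lambda xs: [x * 2 for x in xs])
    return await asyncio.gather(*(batcher.infer(i) for i in range(4)))

print(asyncio.run(main()))  # [0, 2, 4, 6]
```

On a GPU this pattern trades a few milliseconds of added latency for much higher throughput, since one batched forward pass replaces many single-item calls.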
Posted 6 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer – Voice, NLP, and GenAI Systems
Location: Sector 63, Gurgaon – 100% In-Office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM to 8:00 PM
Experience: 2–6 years in AI/ML, NLP, or applied machine learning engineering
Apply at: careers@darwix.ai
Subject Line: Application – AI Engineer – [Your Name]
About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite — Transform+, Sherpa.ai, and Store Intel — powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement.
Role Overview
As an AI Engineer, you will play a key role in designing, developing, and scaling the AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams.
Key Responsibilities
AI & NLP System Development
Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization.
Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data.
Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations.
LLM & Prompt Engineering
Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases.
Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules.
Develop internal tools for model evaluation and prompt performance tracking.
Productionization & Integration
Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI).
Optimize inference time and resource utilization for real-time and batch processing needs.
Implement monitoring and logging for production ML systems to track drift and failure cases.
Data & Evaluation
Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance.
Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks.
Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency.
Collaboration & Research
Work closely with product managers to translate business problems into model design requirements.
Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI.
Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders.
Required Skills & Qualifications
2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications.
Strong programming skills in Python, including libraries like PyTorch, TensorFlow, and Hugging Face Transformers.
Experience with speech processing, audio feature extraction, or STT pipelines.
Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis.
Familiarity with deploying models as APIs and integrating them with production backend systems.
Good understanding of data pipelines, preprocessing techniques, and scalable model architectures.
Preferred Qualifications
Prior experience with multilingual NLP systems or models tuned for Indian languages.
Exposure to RAG pipelines, embeddings search (e.g., FAISS, Pinecone), and vector databases.
Experience working with voice analytics, diarization, or conversational scoring frameworks.
Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment).
Experience in SaaS product environments serving enterprise clients.
Success in This Role Means
Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention.
Inference pipelines optimized for enterprise-scale deployments with high availability.
New features and improvements delivered quickly to drive direct business impact.
AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients.
You Will Excel in This Role If You
Love building AI systems that create measurable value in the real world, not just in research labs.
Enjoy solving messy, real-world data problems and working with multilingual and noisy data.
Are passionate about voice and NLP, and constantly follow advancements in GenAI.
Thrive in a fast-paced, high-ownership environment where ideas quickly become live features.
How to Apply
Email your updated CV to careers@darwix.ai
Subject Line: Application – AI Engineer – [Your Name]
(Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production.
This is an opportunity to build foundational AI systems at one of India's fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams — Darwix AI wants to hear from you.
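Among the evaluation metrics this role tracks, WER (word error rate) is conventionally computed as the word-level Levenshtein distance between a reference transcript and the ASR hypothesis, divided by the reference length. A minimal sketch (illustrative only; production evaluation pipelines would typically use a library such as jiwer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words,
    via Levenshtein distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One inserted word against a four-word reference -> 1/4
print(wer("the call went well", "the call went very well"))  # 0.25
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is usually reported alongside the reference word count.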
Posted 6 days ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Vice President – Engineering
Location: Gurgaon (In-Office)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 10–15 years in full-stack engineering, with at least 4+ years in a senior leadership role in SaaS/AI platforms
Apply at: careers@darwix.ai
Subject Line: Application – VP Engineering – [Your Name]
About Darwix AI
Darwix AI is one of India's fastest-growing GenAI SaaS platforms, transforming enterprise revenue teams with real-time conversational intelligence and AI-powered sales enablement tools. Our core product suite powers:
Multilingual speech-to-text
Real-time agent coaching & nudges
Automated scoring of sales calls
CRM/LOS integrations
Analytics dashboards and more
Our clients include leading brands in BFSI, real estate, retail, and healthcare, such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. We operate across India, MENA, and Southeast Asia, and are backed by marquee investors, industry CXOs, and IIT/IIM/BITS alumni. As we scale to become the backbone of revenue intelligence globally, engineering excellence is the cornerstone of our journey.
Role Overview
We are looking for a visionary and hands-on Vice President – Engineering who will take end-to-end ownership of our technology stack, engineering roadmap, and team. You will be responsible for architecting scalable GenAI-first systems, driving high-velocity product shipping, ensuring enterprise-grade reliability, and building a high-performing engineering culture. This role requires strategic thinking, hands-on execution, deep experience with AI/ML or data-intensive platforms, and the ability to attract and retain top-tier engineering talent. You will work closely with the CEO, Head of AI/ML, Product, and the Founder's Office to deliver robust, secure, and intelligent experiences across web and real-time systems.
Key Responsibilities
1. Technology Leadership
Define and drive the technology vision, architecture, and roadmap across AI, platform, data, and frontend systems
Lead system architecture design for speech processing, real-time audio streaming, LLM-based agents, and multilingual interfaces
Ensure engineering best practices: microservices design, scalability, uptime SLAs, and test automation
2. Team Building & People Leadership
Build, mentor, and scale a high-performance engineering team including backend, frontend, AI, DevOps, QA, and integrations
Instill a culture of ownership, fast iteration, performance reviews, and deep collaboration across teams
Define career paths, recruit top-tier talent, and drive technical onboarding and coaching
3. Product Execution & Delivery
Work with Product, Founders, and Sales Engineering to translate business needs into scalable technical implementations
Own timelines, sprint planning, delivery schedules, and development velocity metrics
Ensure security, latency, fault tolerance, and compliance across all releases
4. Platform & Infra Ownership
Oversee cloud infrastructure on AWS (or GCP), containerization, CI/CD pipelines, observability, and cost optimization
Own DevOps, uptime, disaster recovery, and horizontal scaling for high-concurrency needs
Manage data engineering and database architecture across PostgreSQL, MongoDB, S3, Redis, and vector DBs
5. AI/ML & GenAI Integration
Work closely with the Head of AI/ML on integrating Whisper, LangChain, custom RAG pipelines, and multilingual inference stacks into production systems
Translate AI/ML research into performant APIs, stream pipelines, and real-time inference microservices
Ensure optimized GPU/CPU utilization, caching, and latency control on real-time AI modules
6. Client-Facing & Sales Support
Support enterprise pre-sales and post-sales engineering for large deals (in BFSI, real estate, etc.)
Handle security reviews, solution architecture walkthroughs, and platform deep dives with client CTO teams
Create system documentation, architecture diagrams, and technical solution notes
Qualifications & Skills
Must-Have
10–15 years of total experience in software engineering, with at least 4–5 years in senior leadership
Proven experience scaling a B2B SaaS platform across geographies
Deep backend expertise in Python, Node.js, PostgreSQL, MongoDB, and microservices
Strong understanding of cloud-native development (AWS/GCP), DevOps, CI/CD, and real-time systems
Experience managing large-scale, concurrent user bases and low-latency applications
Strong understanding of security protocols, enterprise authentication (SAML/OAuth), and compliance
Good to Have
Exposure to speech-to-text, audio processing, NLP, or GenAI-based inference systems
Experience working with AI frameworks (Torch, HuggingFace, LangChain) and GPU inference optimization
Familiarity with Flutter, Angular/React, or mobile deployment is a plus
Hands-on experience with vector databases, Redis, WebSockets, and scalable caching mechanisms
Who You Are
You've built high-growth engineering teams from scratch or scaled them through 10x growth phases
You've architected and shipped world-class SaaS systems, preferably with real-time or AI components
You balance technical depth with strategic business alignment
You lead by influence, feedback, and deep respect — not just authority
You're obsessed with metrics, uptime, shipping velocity, and end-user outcomes
You're energized by fast-paced environments, ambiguous challenges, and clear impact
What Success Looks Like in 12 Months
A rock-solid engineering team shipping weekly with full CI/CD and 99.99% uptime
Near real-time AI inference systems powering multilingual speech and coaching
Engineering velocity has doubled, without sacrificing stability
Architecture is mature, modular, secure, and scalable across geographies
You've built strong tech documentation and institutional knowledge systems
Product features are built fast, secure, and adopted well by enterprise clients
Engineering culture is resilient, hungry, and ready for global scale
How to Apply
Send your resume to careers@darwix.ai
Subject: Application – VP Engineering – [Your Name]
(Optional): Include a brief note outlining a high-impact engineering initiative you've led and the measurable impact it created.
This is not just an engineering leadership role. It's a rare opportunity to build a generational AI company from the ground up — as the tech backbone for sales teams globally. If you've built systems that scale, teams that thrive, and products that change behavior — we'd love to speak with you.
Posted 6 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer – Backend Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 4–8 Years About Darwix AI Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines, agent assist systems, AI-based real-time decisioning, and scalable SaaS delivery. Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational. Role Overview We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services—from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices. Key Responsibilities 1. Backend API Development Design and implement scalable, secure RESTful APIs using FastAPI, Flask, or Django REST Framework Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting Optimize API performance with proper indexing, pagination, caching, and load management strategies Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints 2.
AI Integrations & Inference Orchestration Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows Build services to manage prompt templates, chaining logic, and LangChain flows Deploy and manage vector database integrations (e.g., FAISS, Pinecone, Weaviate) for real-time search and recommendation pipelines 3. Database Design & Optimization Model and maintain relational databases using MySQL or PostgreSQL; experience with MongoDB is a plus Optimize SQL queries, schema design, and indexes to support low-latency data access Set up background jobs for session archiving, transcript cleanup, and audio-data binding 4. System Architecture & Deployment Own backend deployments using GitHub Actions, Docker, and AWS EC2 Ensure high availability of services through containerization, horizontal scaling, and health monitoring Manage staging and production environments, including DB backups, server health checks, and rollback systems 5. Security, Auth & Access Control Implement robust authentication (JWT, OAuth), rate limiting, and input validation Build role-based access controls (RBAC) and audit logging into backend workflows Maintain compliance-ready architecture for enterprise clients (data encryption, PII masking) 6. Code Quality, Documentation & Collaboration Write clean, modular, extensible Python code with meaningful comments and documentation Build test coverage (unit, integration) using PyTest, unittest, or Postman/Newman Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team Required Skills & Qualifications: Technical Expertise 3–8 years of experience in backend development with Python and PHP.
Strong experience with FastAPI, Flask, or Django (at least one in production-scale systems) Deep understanding of RESTful APIs, microservice architecture, and asynchronous Python patterns Strong hands-on experience with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB, Redis, or Elasticsearch Experience with containerized deployment using Docker and cloud platforms like AWS or GCP Familiarity with Git, GitHub, CI/CD pipelines, and Linux-based server environments Plus Points Experience working on audio processing, speech-to-text (STT) pipelines, or RAG architectures Hands-on experience with vector databases, LangChain, or LangGraph Exposure to real-time systems, WebSockets, and stream processing Basic understanding of frontend integration workflows (e.g., with HTML/CSS/JS interfaces)
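The responsibilities in this posting call for optimizing API performance with pagination and caching. As a framework-agnostic illustration of those two ideas (the function and parameter names here are hypothetical, not part of Darwix AI's actual stack):

```python
from functools import lru_cache

# Illustrative only: an offset/limit pagination helper of the kind a REST
# endpoint might delegate to. Names are hypothetical, not a real API.
def paginate(items, page=1, page_size=20, max_page_size=100):
    """Return one page of `items` plus metadata, clamping the page size."""
    page_size = min(page_size, max_page_size)   # guard against oversized requests
    start = (page - 1) * page_size
    return {
        "page": page,
        "page_size": page_size,
        "total": len(items),
        "results": items[start:start + page_size],
    }

@lru_cache(maxsize=256)
def cached_lookup(key):
    # Stand-in for an expensive DB or model call; lru_cache memoizes results
    # so repeated requests for the same key skip the expensive work.
    return key.upper()
```

In a FastAPI or Flask handler, `page` and `page_size` would typically arrive as validated query parameters and the clamp would protect the database from unbounded scans.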
Posted 6 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. 
Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 
🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. 
Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
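The RAG responsibilities described above (chunking, embedding, vector search) reduce at their core to ranking stored chunks by similarity to a query embedding. A dependency-free sketch of that retrieval step, with toy vectors and hypothetical function names (a production system would use FAISS or Pinecone with learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, chunks, k=2):
    """chunks: list of (text, embedding) pairs. Return the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved texts would then be stitched into the LLM prompt; vector databases replace the linear scan here with approximate nearest-neighbor indexes.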
Posted 6 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. 
Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 
🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. 
Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
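Among the responsibilities above is building internal tools to benchmark latency for production-grade AI features. A minimal, hypothetical sketch of such a tool, reporting nearest-rank percentiles over recorded request latencies (not an actual Darwix AI internal tool):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile (pct in (0, 100]) of a list of numbers."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # nearest-rank method
    return ordered[rank - 1]

def latency_report(samples_ms):
    """Summarize latencies (ms) as the p50/p95 figures dashboards usually track."""
    return {"p50": percentile(samples_ms, 50), "p95": percentile(samples_ms, 95)}
```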
Posted 6 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering in the enterprise software industry, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). OCI is currently building a future-ready Gen2 cloud Data Science service platform. At the core of this platform lies the AI Cloud Service. What OCI AI Cloud Services are: a set of services on the public cloud, powered by ML and AI, that meet enterprise modernization needs and work out of the box. These services and models can be easily specialized for specific customers/domains by leveraging existing OCI services. Key Points: they enable customers to add AI capabilities to their apps and workflows easily via APIs or containers, are usable without needing to build AI expertise in-house, and cover key gaps for public clouds and the enterprise in Decision Support, NLU, NLP, Vision, and Conversational AI. Your Opportunity: As we innovate to provide a single collaborative ML environment for data-science professionals, we will be extremely happy to have you join us and shape the very future of our Machine Learning platform by building an AI Cloud service. We are addressing exciting challenges at the intersection of artificial intelligence and innovative cloud infrastructure. We are building cloud services in Computer Vision for Image/Video and Document Analysis, Decision Support (Anomaly Detection, Time series forecasting, Fraud detection, Content moderation, Risk prevention, predictive analytics), Natural Language Processing (NLP), and Speech that work out of the box for enterprises. Our product vision includes the ability for enterprises to customize the services for their business and train them to specialize on their data by creating micro models that enhance the global AI models.
What You’ll Do Develop scalable infrastructure, including microservices and a backend, that automates training, deployment, and optimization of ML model inference. Build a core of Artificial Intelligence (AI) services such as Vision, Speech, Language, Decision, and others. Brainstorm and design various POCs using OCI AI Services for new or existing enterprise problems. Collaborate with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing external and internal stakeholder product challenges. Lead research and development efforts to explore new tools, frameworks, and methodologies to improve backend development processes. Experiment with ML models in Python/C++ using machine learning libraries (PyTorch, ONNX, TensorRT, Triton, TensorFlow, JAX), etc. Leverage cloud technology – Oracle Cloud (OCI), AWS, GCP, Azure, or similar. Qualifications Master’s degree (preferred) or equivalent experience in Computer Science, Statistics, Mathematics, artificial intelligence, machine learning, computer vision, operations research, or a related technical field. 3+ years of experience for a PhD, 5+ years for a Master’s, or demonstrated ability designing, implementing, and deploying machine learning models in production environments. Practical experience in the design, implementation, and production deployment of distributed systems using microservices architecture and APIs, using common frameworks like Spring Boot (Java). Practical experience working in a cloud environment – Oracle Cloud (OCI), AWS, GCP, or Azure – and with containerization (Docker, Kubernetes). Working knowledge of current techniques, approaches, and inference optimization strategies for machine learning models. Experience with performance tuning, scalability, and load balancing techniques. Expert in at least one high-level language such as Java/C++ (Java preferred).
Expert in at least one scripting language such as Python, JavaScript, or Shell. Deep understanding of data structures and algorithms, and excellent problem-solving skills. Experience or willingness to learn and work in Agile and iterative development and DevOps processes. Strong drive to learn and master new technologies and techniques. You enjoy a fast-paced work environment. Additional Preferred Qualifications Experience with cloud-native frameworks, tools, and products is a plus Experience in computer vision tasks like Image Classification, Object Detection, Segmentation, Text detection & recognition, Information extraction from documents, etc. Having an impressive set of GitHub projects or contributions to open-source technologies is a plus Hands-on experience with horizontally scalable data stores such as Hadoop and other NoSQL technologies like Cassandra is a plus. Our vision is to provide an immersive AI experience on Oracle Cloud. Aggressive as it might sound, our growth journey is fueled by highly energetic, technology-savvy engineers like YOU who are looking to grow with us to meet the demands of building a powerful next-generation platform. Are you ready to do something big? Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
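The Oracle posting's qualifications mention inference optimization strategies for machine learning models; one common such strategy is micro-batching, where incoming requests are grouped so the model executes once per batch instead of once per request. A hedged, illustrative sketch (run_model is a hypothetical stand-in, not an OCI API):

```python
# Illustrative micro-batching sketch: amortize per-call model overhead by
# running inference on fixed-size groups of requests.
def batched(requests, batch_size=8):
    """Yield successive batches of at most `batch_size` requests."""
    for i in range(0, len(requests), batch_size):
        yield requests[i:i + batch_size]

def serve(requests, run_model, batch_size=8):
    """Run `run_model` once per batch and flatten the per-batch outputs."""
    results = []
    for batch in batched(requests, batch_size):
        results.extend(run_model(batch))  # one model invocation per batch
    return results
```

Real serving stacks (e.g. Triton, which the posting names) add a time window to this idea, flushing a partial batch after a latency deadline rather than waiting for it to fill.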
Posted 6 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. 
Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 
🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. 
Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Lead AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 6 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 3–8 years

About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational-intelligence and real-time agent-assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time. We are backed by marquee VCs and 30+ angel investors, and led by alumni of IITs, IIMs, and BITS with deep experience building and scaling products from India for the world.

Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. The role requires deep technical expertise in Python, a strong foundation in backend architecture, and close collaboration with the AI, product, and infrastructure teams. You will take ownership of critical backend modules and help shape the engineering culture in a rapidly evolving, high-impact environment.
Key Responsibilities

System Architecture & API Development
Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask
Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems
Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference

Data Pipelines & Integrations
Build and optimize ETL pipelines that manage structured and unstructured data from internal and third-party sources
Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms such as Salesforce, Zoho, and LeadSquared
Lead scraping and data-ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy

AI/ML Enablement
Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning
Implement backend support for prompt orchestration, LangChain flows, and function-calling interfaces
Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines

Database & Storage Design
Optimize database design and queries in MySQL, PostgreSQL, and MongoDB
Architect and manage Redis and Kafka for caching, queueing, and real-time communication

DevOps & Quality
Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments
Identify and resolve bottlenecks in performance, memory, or data throughput
Adhere to best practices in code quality, testing, security, and documentation

Leadership & Collaboration
Mentor junior developers and participate in code reviews
Collaborate cross-functionally with the product, AI, design, and sales-engineering teams
Contribute to architectural decisions, roadmap planning, and scaling strategies

Qualifications
4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming
Hands-on experience with FastAPI, Django, or Flask in production environments
Proven experience building scalable microservices, data pipelines, and backend systems that support live applications
Strong command of REST API architecture, database optimization, and data modeling
Solid experience with web-scraping tools, automation frameworks, and external API integrations
Knowledge of AI tools such as LangChain, HuggingFace, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus
Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments
Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving
Posted 6 days ago