
4056 FastAPI Jobs - Page 20

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Yubi, formerly known as CredAvenue, is redefining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with the tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
Yubi Invest - Fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
Financial Services Platform - Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
Spocto - Debt recovery and risk mitigation platform.
Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.
So far, we have onboarded 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are a one-of-a-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Key Responsibilities:
- Develop and maintain APIs using Node.js and FastAPI for bot orchestration and deployment.
- Implement and optimize CI/CD pipelines (GitHub, Docker, AWS ECR) for automated bot deployments.
- Manage and operate Kubernetes clusters (AWS EKS or K3s) for voice bot hosting and scaling.
- Integrate voice bots with ASR, TTS, and telephony systems (e.g., AWS Connect).
- Implement real-time monitoring and alerting for bot performance, latency, and system health.
- Collaborate on ensuring high availability and fault tolerance for 10M+ daily user interactions.
- Work with the Campaign Management Platform to schedule and execute outbound voice campaigns.

Required Qualifications:
- 4-5 years of experience in MLOps, DevOps, or a related software engineering role.
- Strong proficiency in Node.js and Python (FastAPI) for backend development.
- Strong proficiency in Redis, NATS, and Dragonfly for context and cache management.
- Strong proficiency in MongoDB, SQLite, or a comparable database.
- Solid experience with Docker and containerization.
- Hands-on experience with Kubernetes (EKS or K3s) for deployment and operations.
- Practical experience with AWS cloud services (EC2, EKS, ECR, S3, CloudWatch).
- Experience with CI/CD pipelines.

Preferred Qualifications (Added Advantages):
- Exposure to frontend development (e.g., React).
- Familiarity with Voice AI architecture (ASR/TTS/LLM) or telephony systems.
- Experience with LLM serving frameworks.
- Exposure to campaign management or outbound dialer systems.
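The role pairs FastAPI services with Redis-style context and cache management for live voice bots. As a hedged, stdlib-only illustration (the `ContextCache` class, the TTL policy, and the call IDs are invented for this sketch; a production system would use Redis rather than an in-process dict), per-call context caching with expiry might look like:

```python
import time

class ContextCache:
    """In-process stand-in for a Redis-style context store with TTL expiry."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # call_id -> (expiry timestamp, context dict)

    def put(self, call_id: str, context: dict) -> None:
        self._store[call_id] = (time.monotonic() + self.ttl, context)

    def get(self, call_id: str):
        entry = self._store.get(call_id)
        if entry is None:
            return None
        expiry, context = entry
        if time.monotonic() > expiry:  # lazily evict stale conversation state
            del self._store[call_id]
            return None
        return context

cache = ContextCache(ttl_seconds=0.05)
cache.put("call-42", {"turn": 3, "intent": "collections"})
assert cache.get("call-42")["turn"] == 3
time.sleep(0.06)
assert cache.get("call-42") is None  # expired after the TTL window
```

With Redis the same shape reduces to `SETEX`/`GET`, and expiry is handled server-side instead of on read.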

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Principal Software Engineer – AI
Location: Gurgaon (In-Office)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years of hands-on development in AI/ML systems, with deep experience in shipping production-grade AI products
Apply at: careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]

About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS platform transforming how large sales and CX teams operate across India, MENA, and Southeast Asia. We build deeply integrated conversational intelligence and agent assist tools that enable:
- Multilingual speech-to-text pipelines
- Real-time agent coaching
- AI-powered sales scoring
- Predictive analytics and nudges
- CRM and telephony integrations
Our clients include leading enterprises like IndiaMart, Bank Dofar, Wakefit, GIVA, and Sobha, and our product is deeply embedded in the daily workflows of field agents, telecallers, and enterprise sales teams. We are backed by top VCs and built by alumni from IIT, IIM, and BITS with deep expertise in real-time AI, enterprise SaaS, and automation.

Role Overview
We are hiring a Principal Software Engineer – AI to lead the development of advanced AI features in our conversational intelligence suite. This is a high-ownership role that combines software engineering, system design, and AI/ML application delivery. You will work across our GenAI stack—including Whisper, LangChain, LLMs, audio streaming, transcript processing, NLP pipelines, and scoring models—to build robust, scalable, and low-latency AI modules that power real-time user experiences. This is not a research role: you will be building, deploying, and optimizing production-grade AI features used daily by thousands of sales agents and managers across industries.

Key Responsibilities
1. AI System Architecture & Development
- Design, build, and optimize core AI modules such as: multilingual speech-to-text (Whisper, Deepgram, Google STT); prompt-based LLM workflows (OpenAI, open-source LLMs); transcript post-processing (punctuation, speaker diarization, timestamping); real-time trigger logic for call nudges and scoring.
- Build resilient pipelines using Python, FastAPI, Redis, Kafka, and vector databases.
2. Production-Grade Deployment
- Implement GPU/CPU-optimized inference services for latency-sensitive workflows.
- Use caching, batching, asynchronous processing, and message queues to scale real-time use cases.
- Monitor system health, fallback workflows, and logging for ML APIs in live environments.
3. ML Workflow Engineering
- Work with the Head of AI to fine-tune, benchmark, and deploy custom models for call scoring (tone, compliance, product pitch), intent recognition and sentiment classification, and text summarization and cue generation.
- Build modular services to plug models into end-to-end workflows.
4. Integrations with Product Modules
- Collaborate with frontend, dashboard, and platform teams to serve AI output to users.
- Ensure transcript mapping, trigger visualization, and scoring feedback appear in real time in the UI.
- Build APIs and event triggers to interface AI systems with CRMs, telephony, WhatsApp, and analytics modules.
5. Performance Tuning & Optimization
- Profile latency and throughput of AI modules under production loads.
- Implement GPU-aware batching, model distillation, or quantization where required.
- Define and track key performance metrics (latency, accuracy, dropout rates).
6. Tech Leadership
- Mentor junior engineers and review AI system architecture, code, and deployment pipelines.
- Set engineering standards and documentation practices for AI workflows.
- Contribute to planning, retrospectives, and roadmap prioritization.

What We're Looking For
Technical Skills
- 6–10 years of backend or AI-focused engineering experience in fast-paced product environments.
- Strong Python fundamentals with experience in FastAPI, Flask, or similar frameworks.
- Proficiency in PyTorch, Transformers, and the OpenAI API/LangChain.
- Deep understanding of speech/text pipelines, NLP, and real-time inference.
- Experience deploying LLMs and AI models in production at scale.
- Comfort with PostgreSQL, MongoDB, Redis, Kafka, S3, and Docker/Kubernetes.
System Design Experience
- Ability to design and deploy distributed AI microservices.
- Proven track record of latency optimization, throughput scaling, and high-availability setups.
- Familiarity with GPU orchestration, containerization, CI/CD (GitHub Actions/Jenkins), and monitoring tools.
Bonus Skills
- Experience working with multilingual STT models and Indic languages.
- Knowledge of Hugging Face, Weaviate, Pinecone, or vector search infrastructure.
- Prior work on conversational AI, recommendation engines, or real-time coaching systems.
- Exposure to sales/CX intelligence platforms or enterprise B2B SaaS.

Who You Are
- A pragmatic builder—you don't chase perfection but deliver what scales.
- A systems thinker—you see across data flows, bottlenecks, and trade-offs.
- A hands-on leader—you mentor while still writing meaningful code.
- A performance optimizer—you love shaving off latency and memory bottlenecks.
- A product-focused technologist—you think about UX, edge cases, and real-world impact.

What You'll Impact
- Every nudge shown to a sales agent during a live customer call.
- Every transcript that powers a manager's coaching decision.
- Every scorecard that enables better hiring and training at scale.
- Every dashboard that shows what drives revenue growth for CXOs.
This role puts you at the intersection of AI, revenue, and impact—what you build is used daily by teams closing millions in sales across India and the Middle East.

How to Apply
Send your resume to careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]
(Optional): Include a brief note describing one AI system you've built for production—what problem it solved, what stack it used, and what challenges you overcame.
If you're ready to lead the AI backbone of enterprise sales, build world-class systems, and drive real-time intelligence at scale—Darwix AI is where you belong.
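The deployment responsibilities above mention batching and asynchronous processing for latency-sensitive inference. As a rough, hedged sketch (the `MicroBatcher` class and the fake `infer_batch` model call are invented for illustration, not the company's actual stack), the micro-batching pattern collects concurrent requests into a single batched model call:

```python
import asyncio

async def infer_batch(texts):
    """Stand-in for a batched model call; real code would hit a GPU service."""
    await asyncio.sleep(0.01)  # simulated inference latency
    return [t.upper() for t in texts]

class MicroBatcher:
    """Collects concurrent requests into one batched inference call."""

    def __init__(self, max_batch: int = 8, max_wait: float = 0.005):
        self.max_batch, self.max_wait = max_batch, max_wait
        self.queue: asyncio.Queue = asyncio.Queue()

    async def start(self):
        asyncio.create_task(self._loop())

    async def _loop(self):
        while True:
            batch = [await self.queue.get()]  # block for the first request
            deadline = asyncio.get_running_loop().time() + self.max_wait
            while len(batch) < self.max_batch:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:  # wait briefly for more requests to fill the batch
                    batch.append(await asyncio.wait_for(self.queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            texts, futures = zip(*batch)
            for fut, result in zip(futures, await infer_batch(list(texts))):
                fut.set_result(result)  # resolve each caller's future

    async def submit(self, text: str) -> str:
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((text, fut))
        return await fut

async def main():
    batcher = MicroBatcher()
    await batcher.start()
    return await asyncio.gather(*(batcher.submit(t) for t in ["hi", "ok"]))

print(asyncio.run(main()))  # ['HI', 'OK']
```

The trade-off is the `max_wait` window: a longer wait yields fuller batches (better GPU throughput) at the cost of added tail latency per request.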

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

🧠 Job Title: Engineering Manager
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 7–12 Years
Compensation: Competitive salary + ESOPs + Performance-based bonuses

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing AI-first startups, building next-gen conversational intelligence and real-time agent assist tools for sales teams globally. We're transforming how enterprise sales happens across industries like BFSI, real estate, retail, and telecom with a GenAI-powered platform that combines multilingual transcription, NLP, real-time nudges, knowledge base integration, and performance analytics—all in one. Our clients include some of the biggest names in India, MENA, and SEA. We're backed by marquee venture capitalists, 30+ angel investors, and operators from top AI, SaaS, and B2B companies. Our founding team comes from IITs, IIMs, BITS Pilani, and global enterprise AI firms. Now, we're looking for a high-caliber Engineering Manager to help lead the next phase of our engineering evolution. If you've ever wanted to build and scale real-world AI systems for global use cases—this is your shot.

🎯 Role Overview
As Engineering Manager at Darwix AI, you will be responsible for leading and managing a high-performing team of backend, frontend, and DevOps engineers. You will directly oversee the design, development, testing, and deployment of new features and system enhancements across Darwix's AI-powered product suite. This is a hands-on technical leadership role, requiring the ability to code when needed, conduct architecture reviews, resolve blockers, and manage the overall engineering execution. You'll work closely with product managers, data scientists, QA teams, and the founders to deliver on roadmap priorities with speed and precision. You'll also be responsible for building team culture, mentoring developers, improving engineering processes, and helping the organization scale its tech platform and engineering capacity.

🔧 Key Responsibilities
1. Team Leadership & Delivery
- Lead a team of 6–12 software engineers (across Python, PHP, frontend, and DevOps).
- Own sprint planning, execution, review, and release cycles.
- Ensure timely and high-quality delivery of key product features and platform improvements.
- Solve execution bottlenecks and ensure clarity across JIRA boards, product documentation, and sprint reviews.
2. Architecture & Technical Oversight
- Review and refine high-level and low-level designs proposed by the team.
- Provide guidance on scalable architectures, microservices design, performance tuning, and database optimization.
- Drive migration of legacy PHP code into scalable Python-based microservices.
- Maintain technical excellence across deployments, containerization, CI/CD, and codebase quality.
3. Hiring, Coaching & Career Development
- Own the hiring and onboarding process for engineers in your pod.
- Coach team members through 1:1s, OKRs, performance cycles, and continuous feedback.
- Foster a culture of ownership, transparency, and high-velocity delivery.
4. Process Design & Automation
- Drive adoption of agile development practices—daily stand-ups, retrospectives, sprint planning, documentation.
- Ensure production-grade observability, incident tracking, root cause analysis, and rollback strategies.
- Introduce quality metrics like test coverage, code review velocity, time-to-deploy, and bug frequency.
5. Cross-functional Collaboration
- Work closely with the product team to translate high-level product requirements into granular engineering plans.
- Liaise with QA, AI/ML, Data, and Infra teams to coordinate implementation across the board.
- Collaborate with customer success and client engineering for debugging and field escalations.

🔍 Technical Skills & Stack
🔹 Primary Languages & Frameworks: Python (FastAPI, Flask, Django); PHP (legacy services, transitioning to Python); TypeScript, JavaScript, HTML5, CSS3; Mustache templates (preferred), React/Next.js (optional)
🔹 Databases & Storage: MySQL (primary), PostgreSQL; MongoDB, Redis; vector DBs: Pinecone, FAISS, Weaviate (RAG pipelines)
🔹 AI/ML Integration: OpenAI APIs, Whisper, Wav2Vec, Deepgram; LangChain, HuggingFace, LlamaIndex, LangGraph
🔹 DevOps & Infra: AWS EC2, S3, Lambda, CloudWatch; Docker, GitHub Actions, Nginx; Git (GitHub/GitLab), Jenkins (optional)
🔹 Monitoring & Testing: Prometheus, Grafana, Sentry; PyTest, Selenium, Postman

✅ Candidate Profile
👨‍💻 Experience:
- 7–12 years of total engineering experience in high-growth product companies or startups.
- At least 2 years of experience managing teams as a tech lead or engineering manager.
- Experience working on real-time data systems, microservices architecture, and SaaS platforms.
🎓 Education:
- Bachelor's or Master's degree in Computer Science or a related field.
- Preferred background from Tier 1 institutions (IITs, BITS, NITs, IIITs).
💼 Traits We Love:
- You lead with clarity, ownership, and high attention to detail.
- You believe in building systems—not just shipping features.
- You are pragmatic and prioritize team delivery velocity over theoretical perfection.
- You obsess over latency, clean interfaces, and secure deployments.
- You want to build a high-performing tech org that scales globally.

🌟 What You'll Get
- Leadership role in one of India's top GenAI startups
- Competitive fixed compensation with performance bonuses
- Significant ESOPs tied to company milestones
- Transparent performance evaluation and promotion framework
- A high-speed environment where builders thrive
- Access to investor and client demos, roadshows, GTM huddles, and more
- Annual learning allowance and access to internal AI/ML bootcamps
- Founding-team-level visibility in engineering decisions and product innovation

🛠️ Projects You'll Work On
- Real-time speech-to-text engine in 11 Indian languages
- AI-powered live nudges and agent assistance in B2B sales
- Conversation summarization and analytics for 100,000+ minutes/month
- Automated call scoring and custom AI model integration
- Multimodal input processing: audio, text, CRM, chat
- Custom knowledge graph integrations across BFSI, real estate, retail

📢 Why This Role Matters
This is not just an Engineering Manager role. At Darwix AI, every engineering decision feeds directly into how real sales teams close deals. You'll see your work powering real-time customer calls, nudging field reps in remote towns, helping CXOs make hiring decisions, and making a measurable impact on enterprise revenue. You'll help shape the core technology platform of a company that's redefining how humans and machines interact in sales.

📩 How to Apply
Email your resume, GitHub/portfolio (if any), and a few lines on why this role excites you to:
📧 people@darwix.ai
Subject: Application – Engineering Manager – [Your Name]
If you're a technical leader who thrives on velocity, takes pride in mentoring developers, and wants to ship mission-critical AI systems that power revenue growth across industries, this is your stage. Join Darwix AI. Let's build something that lasts.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description

🚀 Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
- Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
- Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
- Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
- Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team.
- Collaborate with backend, frontend, and product teams to build scalable production systems.
- Participate in architectural and design decisions across AI, backend, and data workflows.

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
- Strong coding experience in Python and familiarity with FastAPI/Django.
- Understanding of distributed architectures, memory management, and latency optimization.
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
- Worked on multilingual speech recognition and translation.
- Experience deploying AI models on edge devices or browsers.
- Built or contributed to open-source ML/NLP projects.
- Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
- Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
- Build an in-house knowledge indexing + vector DB framework integrated into the product.
- Mentor 2–3 AI engineers and own execution across multiple modules.
- Achieve <1 sec latency on the real-time voice-to-nudge pipeline from capture to recommendation.

💼 What We Offer
- Compensation: Competitive fixed salary + equity + performance-based bonuses
- Impact: Ownership of key AI modules powering thousands of live enterprise conversations
- Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
- Culture: High-trust, outcome-first environment that celebrates execution and learning
- Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
- Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you're a builder, architect, and visionary—who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to: any relevant open-source contributions; LLM/STT models you've fine-tuned or deployed; RAG pipelines you've worked on.

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform—from India, for the world.
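The RAG responsibilities in the listing above center on chunking, embedding, and vector retrieval. As a hedged toy sketch (bag-of-words counts stand in for real dense embeddings such as OpenAI or Hugging Face models, and the tiny knowledge-base corpus is invented), the retrieval step reduces to nearest-neighbor search by cosine similarity:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts. Real RAG uses dense model embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank knowledge-base chunks by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "refund policy: refunds are processed within 7 days",
    "pricing tiers: starter, growth, and enterprise plans",
    "onboarding: new agents complete training in week one",
]
print(retrieve("how long do refunds take", chunks))
# → ['refund policy: refunds are processed within 7 days']
```

In production the same loop is replaced by an approximate index (FAISS, Pinecone, Weaviate) over precomputed embeddings, and the top-k chunks are injected into the LLM prompt for generation.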

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description

🚀 Job Title: ML Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the ML Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
- Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
- Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
- Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
- Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team.
- Collaborate with backend, frontend, and product teams to build scalable production systems.
- Participate in architectural and design decisions across AI, backend, and data workflows.

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
- Strong coding experience in Python and familiarity with FastAPI/Django.
- Understanding of distributed architectures, memory management, and latency optimization.
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
- Worked on multilingual speech recognition and translation.
- Experience deploying AI models on edge devices or browsers.
- Built or contributed to open-source ML/NLP projects.
- Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
- Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
- Build an in-house knowledge indexing + vector DB framework integrated into the product.
- Mentor 2–3 AI engineers and own execution across multiple modules.
- Achieve <1 sec latency on the real-time voice-to-nudge pipeline from capture to recommendation.

💼 What We Offer
- Compensation: Competitive fixed salary + equity + performance-based bonuses
- Impact: Ownership of key AI modules powering thousands of live enterprise conversations
- Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
- Culture: High-trust, outcome-first environment that celebrates execution and learning
- Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
- Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you're a builder, architect, and visionary—who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in / vishnu.sethi@cur8.in
Subject Line: Application – ML Engineer – [Your Name]
Include links to: any relevant open-source contributions; LLM/STT models you've fine-tuned or deployed; RAG pipelines you've worked on.

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform—from India, for the world.

Posted 1 week ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Gurgaon, Coimbatore, Hyderabad (Hybrid)
Experience: 3–5 years

About the Role
We are seeking a skilled Python Backend Developer to join our team. The ideal candidate should have solid experience building web applications and API microservices using Python frameworks. You will work on high-performance backend systems, follow engineering best practices, and collaborate with cross-functional teams to deliver scalable and secure applications.

Key Responsibilities
Design, develop, and maintain backend services and RESTful APIs.
Write clean, maintainable, and efficient code following best practices.
Work with large and complex codebases, ensuring scalability and performance.
Collaborate with frontend and DevOps teams to ensure seamless integration.
Follow Agile methodology and contribute to continuous improvement.
Participate in code reviews, troubleshooting, and performance tuning.

Required Skills & Experience
3–5 years of Python development experience focused on web applications or services.
Strong proficiency in Python 3.x with the FastAPI or Flask frameworks.
Hands-on experience with RDBMS (Postgres, Oracle, or similar) and ORMs.
Proven experience building API-based microservices.
Good understanding of DevOps processes and CI/CD practices.
Familiarity with Agile/Scrum methodology.
Strong problem-solving and debugging skills.

Good to Have
Experience with GCP or other cloud platforms.
Exposure to Kubernetes and containerized deployments.
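For a sense of the work, the service layer behind such a RESTful API can be sketched framework-agnostically; the `Item` model and in-memory store below are illustrative stand-ins for an ORM-backed table, not part of the job description:

```python
# Framework-agnostic sketch of the repository a /items REST endpoint delegates to.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    id: int
    name: str
    price: float

class ItemRepository:
    """CRUD operations a FastAPI/Flask route layer would call into."""
    def __init__(self) -> None:
        self._items: dict[int, Item] = {}
        self._next_id = 1

    def create(self, name: str, price: float) -> Item:
        item = Item(id=self._next_id, name=name, price=price)
        self._items[item.id] = item
        self._next_id += 1
        return item

    def get(self, item_id: int) -> Optional[Item]:
        return self._items.get(item_id)

    def list(self, offset: int = 0, limit: int = 10) -> list[Item]:
        # Pagination mirrors the typical ?offset=&limit= query parameters.
        return list(self._items.values())[offset:offset + limit]

    def delete(self, item_id: int) -> bool:
        return self._items.pop(item_id, None) is not None
```

Keeping persistence behind a small interface like this is what makes swapping the dict for Postgres-plus-ORM a local change rather than a rewrite.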

Posted 1 week ago


2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚀 Job Title: Lead AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and industry CXOs. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the Lead AI Engineer, you will drive the development, deployment, and optimization of the AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role in which you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
Fine-tune LLMs and foundation models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
Optimize prompts for summarization, categorization, tone analysis, objection handling, and more.
Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
Lead, mentor, and grow a high-performing AI engineering team.
Collaborate with backend, frontend, and product teams to build scalable production systems.
Participate in architectural and design decisions across AI, backend, and data workflows.
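The chunked, event-driven audio flow described in the real-time responsibilities can be sketched with `asyncio` alone. The fake transcriber and sentinel-based shutdown below are illustrative simplifications of what would, in production, be a WebSocket stream feeding an ASR model such as Whisper:

```python
# Minimal asynchronous, chunked processing pipeline with backpressure.
import asyncio

async def producer(queue: asyncio.Queue, chunks: list[str]) -> None:
    for chunk in chunks:
        await queue.put(chunk)   # simulates audio chunks arriving over a socket
    await queue.put(None)        # sentinel: the stream has closed

async def transcriber(in_q: asyncio.Queue, results: list[str]) -> None:
    while True:
        chunk = await in_q.get()
        if chunk is None:
            break
        results.append(f"text({chunk})")  # stand-in for model inference

async def run_pipeline(chunks: list[str]) -> list[str]:
    queue: asyncio.Queue = asyncio.Queue(maxsize=4)  # bounded queue = backpressure
    results: list[str] = []
    await asyncio.gather(producer(queue, chunks), transcriber(queue, results))
    return results
```

The bounded queue is the important detail: if inference falls behind capture, the producer awaits rather than letting memory grow without limit.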
🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies.
Proven track record of deploying ASR, STT, NLP, or GenAI models to production.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or in browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in the voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least one enterprise client.
Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models.
Build an in-house knowledge-indexing and vector DB framework integrated into the product.
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft and IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role Is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is not for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject line: Application – Lead AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
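The chunking and indexing strategy named under the RAG responsibilities can be illustrated with a minimal overlapping-window chunker. The 200-character window and 50-character overlap are illustrative defaults, not values from the posting:

```python
# Sketch of a chunking step for a knowledge-indexing pipeline: overlapping
# character windows, each of which would then be embedded and indexed.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows so context survives chunk edges."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap is the design choice worth noting: a sentence that straddles a chunk boundary still appears whole in at least one chunk, which keeps retrieval from missing boundary-spanning facts.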

Posted 1 week ago


10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Lead Backend Developer (Python & Microservices)
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 6–10 years

About Darwix AI
Darwix AI is at the forefront of building the future of revenue enablement through a GenAI-powered conversational intelligence and real-time agent assist platform. Our mission is to empower global sales teams to close better, faster, and smarter by harnessing the transformative power of generative AI, real-time speech recognition, multilingual insights, and next-gen sales analytics. Backed by top venture capitalists and industry leaders, Darwix AI is scaling rapidly across the India, MENA, and US markets. With a leadership team from IIT, IIM, and BITS, we are building enterprise-grade SaaS solutions poised to redefine how organizations engage with customers. If you are looking for a role where your work directly powers mission-critical AI applications used globally, this is your moment.

Role Overview
We are seeking a Lead Backend Developer (Python & Microservices) to drive the architecture, scalability, and performance of our GenAI platform's core backend services. You will own the design, build-out, and leadership of backend systems that are real-time, distributed, and capable of supporting AI-powered applications at scale. You will mentor engineers, set technical direction, collaborate across the AI, Product, and Frontend teams, and ensure that the backend infrastructure is robust, secure, and future-proof. This is a high-ownership, high-impact role for individuals who are passionate about building world-class systems that are production-ready, scalable, and designed for rapid innovation.

Key Responsibilities
🔹 Backend Architecture and Development
Architect and lead the development of highly scalable, modular, event-driven backend systems using Python.
Build and maintain RESTful APIs and microservices that power real-time, multilingual conversational intelligence platforms.
Design systems with a strong focus on scalability, fault tolerance, high availability, and security.
Implement API gateways, service registries, authentication/authorization layers, and caching mechanisms.
🔹 Microservices Strategy
Champion microservices best practices: service decomposition, asynchronous communication, and event-driven workflows.
Manage service orchestration, containerization, and scaling using Docker and Kubernetes (preferred).
Implement robust service monitoring, logging, and alerting frameworks for proactive system health management.
🔹 Real-Time Data Processing
Build real-time data ingestion and processing pipelines using tools like Kafka, Redis Streams, and WebSockets.
Integrate real-time speech-to-text (STT) engines and AI/NLP pipelines into backend flows.
Optimize performance to achieve the low-latency processing a real-time agent-assist experience requires.
🔹 Database and Storage Management
Design and optimize relational (PostgreSQL/MySQL) and non-relational (MongoDB, Redis) database systems.
Implement data sharding, replication, and backup strategies for resilience and scalability.
Integrate vector databases (FAISS, Pinecone, Chroma) to support AI retrieval and embedding-based search.
🔹 DevOps and Infrastructure
Collaborate with DevOps teams to deploy scalable and reliable services on AWS (EC2, S3, Lambda, EKS).
Implement CI/CD pipelines, containerization strategies, and blue-green deployment models.
Ensure security compliance across all backend services (API security, encryption, RBAC).
🔹 Technical Leadership
Mentor junior and mid-level backend engineers.
Define and enforce coding standards, architectural patterns, and best practices.
Conduct design and code reviews, and ensure high engineering quality across the backend team.
🔹 Collaboration
Work closely with AI scientists, Product Managers, Frontend Engineers, and Customer Success teams to deliver delightful product experiences.
Translate business needs into technical requirements and backend system designs.
Drive sprint planning, estimation, and delivery for backend engineering sprints.

Core Requirements
Technical Skills
6–10 years of hands-on backend engineering experience.
Expert-level proficiency in Python.
Strong experience building scalable REST APIs and microservices.
Deep understanding of FastAPI (preferred) or the Flask/Django frameworks.
In-depth knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
Experience with event-driven architectures: Kafka, RabbitMQ, Redis Streams.
Proficiency in containerization and orchestration: Docker, Kubernetes.
Familiarity with real-time communication protocols: WebSockets, gRPC.
Strong understanding of cloud platforms (AWS preferred) and serverless architectures.
Good experience with DevOps tools: GitHub Actions, Jenkins, Terraform (optional).

Bonus Skills
Exposure to integrating AI/ML models (especially LLM, STT, and diarization models) into backend systems.
Familiarity with vector search databases and RAG-based architectures.
Knowledge of GraphQL API development (optional).
Experience scaling multilingual platforms (support for Indic languages is a plus).

Preferred Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
Experience working in product startups, SaaS platforms, AI-based systems, or high-growth technology companies.
Proven track record of owning backend architecture at scale (millions of users or real-time systems).
Strong understanding of software design principles (SOLID, DRY, KISS) and scalable system architecture.

What You'll Get
Ownership: Lead backend engineering at one of India's fastest-growing GenAI startups.
Impact: Build systems that directly power the world's next-generation enterprise sales platforms.
Learning: Work with an elite founding team and top engineers from IIT, IIM, BITS, and top tech companies.
Growth: Fast-track your career into senior technology leadership roles.
Compensation: Competitive salary + ESOPs + performance bonuses.
Culture: High-trust, high-ownership, no-bureaucracy environment focused on speed and innovation.
Vision: Be part of a once-in-a-decade opportunity, building from India for the world.

About the Tech Stack You'll Work On
Languages: Python 3.x
Frameworks: FastAPI (primary), Flask/Django (secondary)
Data Stores: PostgreSQL, MongoDB, Redis, FAISS, Pinecone
Messaging Systems: Kafka, Redis Streams
Cloud Platforms: AWS (EC2, S3, Lambda, EKS)
DevOps: Docker, Kubernetes, GitHub Actions
Others: WebSockets, OAuth 2.0, JWT, microservices patterns

Application Process
Submit your updated resume and GitHub/portfolio links (if available).
Shortlisted candidates will have a technical discussion and coding assessment.
Technical interview rounds cover system design, backend architecture, and problem-solving.
Final leadership interaction round.
Offer!

How to Apply
📩 careers@darwix.ai
Please include:
Updated resume
GitHub profile (optional but preferred)
2–3 lines about why you're excited to join Darwix AI as a Lead Backend Engineer

Join us at Darwix AI – build the AI future for revenue teams, globally!
#LeadBackendDeveloper #PythonEngineer #MicroservicesArchitecture #BackendEngineering #FastAPI #DarwixAI #AIStartup #TechCareers
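Of the stack above, the caching mechanism is the easiest to sketch: an in-process TTL cache with lazy eviction. Redis plays this role in production; the injectable clock below is only there to keep the example deterministic:

```python
# In-process TTL cache: entries expire ttl seconds after they are set.
import time

class TTLCache:
    def __init__(self, ttl: float, clock=time.monotonic) -> None:
        self._ttl = ttl
        self._clock = clock
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        self._store[key] = (self._clock() + self._ttl, value)

    def get(self, key: str, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires, value = entry
        if self._clock() > expires:
            del self._store[key]   # lazy eviction on read
            return default
        return value
```

Lazy eviction keeps the implementation simple; a long-running service would pair it with a periodic sweep or a size bound so dead entries do not accumulate.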

Posted 1 week ago


8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 3–8 years

About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time. We are backed by marquee VCs and 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience building and scaling products from India for the world.

Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with the AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.

Key Responsibilities
System Architecture & API Development
Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask.
Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems.
Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference.
Data Pipelines & Integrations
Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources.
Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared.
Lead scraping and data ingestion from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy.
AI/ML Enablement
Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning.
Implement backend support for prompt orchestration, LangChain flows, and function-calling interfaces.
Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines.
Database & Storage Design
Optimize database design and queries using MySQL, PostgreSQL, and MongoDB.
Architect and manage Redis and Kafka for caching, queueing, and real-time communication.
DevOps & Quality
Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments.
Identify and resolve bottlenecks related to performance, memory, or data throughput.
Adhere to best practices in code quality, testing, security, and documentation.
Leadership & Collaboration
Mentor junior developers and participate in code reviews.
Collaborate cross-functionally with the product, AI, design, and sales engineering teams.
Contribute to architectural decisions, roadmap planning, and scaling strategies.

Qualifications
4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming.
Hands-on experience with FastAPI, Django, or Flask in production environments.
Proven experience building scalable microservices, data pipelines, and backend systems that support live applications.
Strong command of REST API architecture, database optimization, and data modeling.
Solid experience with web scraping tools, automation frameworks, and external API integrations.
Knowledge of AI tools like LangChain, Hugging Face, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus.
Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments.
Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving.
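The ETL pipelines described above can be sketched as a chain of per-record transforms. The `clean`/`enrich` stages and field names are illustrative; a real pipeline would pull records from CRMs or transcription engines rather than literal dicts:

```python
# Sketch of a composable ETL step chain: each stage is a plain function
# from record to record, applied in order.
from typing import Callable

Record = dict
Stage = Callable[[Record], Record]

def clean(record: Record) -> Record:
    """Normalize whitespace and casing on the name field."""
    return {**record, "name": record["name"].strip().title()}

def enrich(record: Record) -> Record:
    """Derive the email domain as a new feature."""
    return {**record, "domain": record["email"].split("@")[-1].lower()}

def run_pipeline(records: list[Record], stages: list[Stage]) -> list[Record]:
    out = []
    for record in records:
        for stage in stages:
            record = stage(record)
        out.append(record)
    return out
```

Because each stage is a pure function, stages can be unit-tested in isolation and reordered or swapped without touching the runner.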

Posted 1 week ago


8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🧠 Job Title: Senior Machine Learning Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 4–8 years
Education: B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields

🚀 About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using generative AI, LLMs, voice AI, and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+, analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven, in real time. We're backed by marquee investors, industry veterans, and AI experts, and we're expanding fast. As a Senior Machine Learning Engineer, you will play a pivotal role in designing and deploying the intelligent ML systems that power every layer of this platform, from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights.

🎯 Role Overview
This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across domains: speech, NLP, tabular, and ranking. Your work will directly power critical product features, from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You'll be the bridge between data science, engineering, and product, converting ideas into models, and models into production-scale systems with tangible business value.

🧪 Key Responsibilities
🔬 1. Model Design, Training, and Optimization
Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources.
Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems.
Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models.
📊 2. Data Engineering & Feature Extraction
Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks.
Work with multimodal inputs (audio, text, metadata) and build smart representations for downstream tasks.
Automate data collection from APIs, CRMs, sales transcripts, and call logs.
⚙️ 3. Productionizing ML Pipelines
Package and deploy models behind scalable APIs (using FastAPI, Flask, or similar frameworks).
Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, and CI/CD pipelines.
Ensure production readiness: logging, monitoring, rollback, and fail-safes.
📈 4. Experimentation & Evaluation
Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops.
Continuously optimize model performance (latency, accuracy, precision-recall trade-offs).
Implement drift detection and re-training pipelines for models in production.
🔁 5. Collaboration with Product & Engineering
Translate business problems into ML problems and align modeling goals with user outcomes.
Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features.
Contribute to the product roadmap with ML-driven ideas and prototypes.
🛠️ 6. Innovation & Technical Leadership
Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques.
Drive innovation in voice-to-insight systems (ASR + diarization + NLP).
Mentor junior engineers and contribute to best practices in ML development and deployment.

🧰 Tech Stack
🔧 Languages & Frameworks
Python (core), SQL, Bash
PyTorch, TensorFlow, Hugging Face, scikit-learn, XGBoost, LightGBM
🧠 ML & AI Ecosystem
Transformers, RNNs, CNNs, CRFs
BERT, RoBERTa, GPT-style models
OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude
FAISS, Pinecone, Qdrant, LlamaIndex
☁️ Deployment & Infrastructure
Docker, Kubernetes, GitHub Actions, Jenkins
AWS (EC2, Lambda, S3, SageMaker), GCP, Azure
Redis, PostgreSQL, MongoDB
📊 Monitoring & Experimentation
MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana

👨‍💼 Qualifications
🎓 Education
Bachelor's or Master's degree in CS, AI, Statistics, or a related quantitative discipline.
Certifications in advanced ML, data science, or AI are a plus.
🧑‍💻 Experience
4–8 years of hands-on experience in applied machine learning.
Demonstrated success deploying models to production at scale.
Deep familiarity with transformer-based architectures and model evaluation.

✅ You'll Excel in This Role If You…
Thrive on solving end-to-end ML problems: not just notebooks, but deployment, testing, and iteration.
Obsess over clean, maintainable, reusable code and pipelines.
Think from first principles and challenge model assumptions when they don't work.
Are deeply curious and have built multiple projects just to learn how something works.
Are comfortable with ambiguity, fast timelines, and real-time data challenges.
Want to build AI products that get used by real people and drive revenue outcomes, not just vanity demos.

💼 What You'll Get at Darwix AI
Work with some of the brightest minds in AI, product, and design.
Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases.
Direct mentorship from senior architects and AI scientists.
Competitive compensation (₹30L–₹45L CTC) + ESOPs + a rapid growth trajectory.
The opportunity to shape the future of a global-first AI startup built from India.
Hands-on experience with the most advanced tech stack in applied ML and production AI.
A front-row seat at a generational company that is redefining enterprise AI.

📩 How to Apply
Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: "What's the most interesting ML system you've built, and what made it work?"
Email: people@darwix.ai
Subject: Senior ML Engineer – Application

🔐 Final Notes
We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference, not just write code. If you're looking for a role where you own real impact, push technical boundaries, and work with a team that's as obsessed with AI as you are, then Darwix AI is the place for you.

Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
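The drift detection named under Experimentation & Evaluation can be reduced to its simplest form: compare a live feature's mean against the training baseline in standard-error units. The z-threshold of 3 is a common convention assumed for illustration, not a value from the posting, and production systems would track full distributions (e.g., PSI or KS tests) rather than means alone:

```python
# Minimal mean-shift drift check for a single numeric feature.
import statistics

def mean_drifted(baseline: list[float], live: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean sits far from the baseline mean."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    se = base_sd / (len(live) ** 0.5)      # standard error of the live mean
    z = abs(statistics.mean(live) - base_mean) / se
    return z > z_threshold
```

A check like this would run on a schedule over recent inference logs and, on alert, trigger the re-training pipeline the posting describes.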

Posted 1 week ago


3.0 - 4.5 years

0 Lacs

India

On-site

Key Responsibilities
Design and build LLM guardrails for prompt-injection protection, toxicity/bias detection, and hallucination/jailbreak identification.
Build and maintain evaluation frameworks to monitor LLM safety, fairness, and compliance.
Develop automated pipelines to process, tag, and evaluate LLM outputs using Python and SQL.
Leverage vector databases and embeddings to detect unsafe content or model drift.
Create internal dashboards and visualizations (Streamlit, Dash, or lightweight React/JS) for POCs and internal tools.
Collaborate with ML engineers and product teams to integrate LLM safety components into production APIs and applications.
Stay current with AI safety research, emerging tools (Ragas, LangChain, Guardrails.ai), and regulatory standards (EU AI Act, NIST AI RMF).

Required Qualifications
3–4.5 years of experience in data science, applied ML, or LLM-based applications.
Strong programming skills in Python and experience writing SQL for data exploration and feature engineering.
Solid understanding of NLP, deep learning (CNNs/RNNs/Transformers), and LLM architectures.
Hands-on experience with Hugging Face, LangChain, LLM APIs (OpenAI, Anthropic), and vector stores (FAISS, Pinecone, Chroma).
Familiarity with front-end basics (Streamlit, Dash, or simple HTML/CSS/JS) is a plus, not mandatory.
Experience with model evaluation, red-teaming, or safety interventions in NLP/LLM systems.
(Preferred) Familiarity with deploying ML pipelines in production (Docker, FastAPI) and the ability to thrive in a fast-paced startup environment.
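For a flavor of the guardrail work described, here is a deliberately simple pattern-based prompt-injection screen. Production guardrails layer classifier models and tooling like Guardrails.ai on top of heuristics like this, and the phrase list and threshold below are illustrative, not exhaustive:

```python
# Heuristic prompt-injection screen: score input against known attack phrases.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now in developer mode",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def injection_score(user_input: str) -> float:
    """Fraction of known injection patterns matched (0.0 = no matches)."""
    text = user_input.lower()
    hits = sum(bool(re.search(p, text)) for p in INJECTION_PATTERNS)
    return hits / len(INJECTION_PATTERNS)

def is_suspicious(user_input: str, threshold: float = 0.25) -> bool:
    return injection_score(user_input) >= threshold
```

Returning a score rather than a boolean lets the calling pipeline route borderline inputs to a heavier classifier instead of hard-blocking them.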

Posted 1 week ago


10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚀 Job Title: Engineering Lead
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 5–10 Years
Compensation: Competitive + performance-based incentives + meaningful ESOPs

🧠 About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, building the future of enterprise revenue intelligence. We offer a GenAI-powered conversational intelligence and real-time agent assist suite that transforms how large sales teams interact, close deals, and scale operations. We're already live with enterprise clients across India, the UAE, and Southeast Asia, and our platform enables multilingual speech-to-text, AI-driven nudges, and contextual conversation coaching, backed by our proprietary LLMs and cutting-edge voice infrastructure. With backing from top-tier VCs and over 30 angel investors, we're now hiring an Engineering Lead who can architect, own, and scale the core engineering stack as we prepare for 10x growth.

🌟 Role Overview
As the Engineering Lead at Darwix AI, you'll take ownership of our platform architecture, product delivery, and engineering quality across the board. You'll work closely with the founders, product managers, and the AI team to turn fast-moving product ideas into scalable features. You will:
Lead backend and full-stack engineers across microservices, APIs, and real-time pipelines
Architect scalable systems for AI/LLM deployments
Drive code quality, maintainability, and engineering velocity
This is a hands-on, player-coach role, perfect for someone who loves building but is also excited about mentoring and growing a technical team.

🎯 Key Responsibilities
🛠️ Technical Leadership
Own the technical architecture across the backend, frontend, and DevOps stacks
Translate product roadmaps into high-performance, production-ready systems
Drive high-quality code reviews, testing practices, and performance optimization
Make critical system-level decisions around scalability, security, and reliability
🚀 Feature Delivery
Work with the product and AI teams to build new features around speech recognition, diarization, real-time coaching, and analytics dashboards
Build and maintain backend services for data ingestion, processing, and retrieval from vector DBs, MySQL, and MongoDB
Create clean, reusable APIs (REST & WebSocket) that power our web-based agent dashboards
🧱 System Architecture
Refactor monoliths into a microservice-based architecture
Optimize real-time data pipelines with Redis, Kafka, and async queues
Implement serverless modules using AWS Lambda, Docker containers, and CI/CD pipelines
🧑‍🏫 Mentorship & Team Building
Lead a growing team of engineers; guide them on architecture, code design, and performance tuning
Foster a culture of ownership, documentation, and continuous learning
Mentor junior developers, review PRs, and set up internal coding best practices
🔄 Collaboration
Act as the key technical liaison between the Product, Design, AI/ML, and DevOps teams
Work directly with the founders on roadmap planning, delivery tracking, and go-live readiness
Contribute actively to investor tech discussions, client onboarding, and stakeholder calls

⚙️ Our Tech Stack
Languages: Python (FastAPI, Django), PHP (legacy support), JavaScript, TypeScript
Frontend: HTML, CSS, Bootstrap, Mustache templates (React.js/Next.js optional)
AI/ML Integration: LangChain, Whisper, RAG pipelines, Transformers, Deepgram, OpenAI APIs
Databases: MySQL, PostgreSQL, MongoDB, Redis, Pinecone/FAISS (vector DBs)
Cloud & Infra: AWS EC2, S3, Lambda, CloudWatch, Docker, GitHub Actions, Nginx
DevOps: Git, Docker, CI/CD pipelines, Jenkins/GitHub Actions, load testing
Tools: Jira, Notion, Slack, Postman, Swagger

🧑‍💼 Who You Are
5–10 years of professional experience in backend/full-stack development
Proven experience leading engineering projects or mentoring junior developers
Comfortable working in high-growth B2B SaaS startups or product-first organizations
Deep expertise in one or more backend frameworks (Django, FastAPI, Laravel, Flask)
Experience working with AI products or integrating APIs from OpenAI, Deepgram, or Hugging Face is a huge plus
Strong understanding of system design, DB normalization, caching strategies, and latency optimization
Bonus: exposure to voice pipelines (STT/ASR), NLP models, or real-time analytics

📌 Qualities We're Looking For
Builder-first mindset: you love launching features fast and scaling them well
Execution speed: you move with urgency but don't break things
Hands-on leadership: you guide people by writing code, not just processes
Problem-solver: when things break, you own the fix and the root cause
Startup hunger: you thrive on chaos, ambiguity, and shipping weekly

🎁 What We Offer
High Ownership: Directly shape the product and its architecture from the ground up
Startup Velocity: Ship fast, learn fast, and push boundaries
Founding Engineer Exposure: Work alongside IIT-IIM-BITS founders with full transparency
Compensation: Competitive salary + meaningful equity + performance-based incentives
Career Growth: Move into an EM/CTO-level role as the org scales
Tech Leadership: Own features end-to-end, from spec to deployment

🧠 Final Note
This is not just another engineering role. This is your chance to:
Own the entire backend for a GenAI product serving global enterprise clients
Lead the technical decisions that define our future infrastructure
Join the leadership team at a startup that's shipping faster than anyone else in the category
If you're ready to build a product with 10x potential, join a high-output team, and be the reason the tech doesn't break at scale, this role is for you.
📩 How to Apply
Send your resume to people@darwix.ai with the subject line: “Application – Engineering Lead – [Your Name]”
Attach:
- Your latest CV or LinkedIn profile
- GitHub/portfolio link (if available)
- A short note (3–5 lines) on why you're excited about Darwix AI and this role
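The real-time pipelines this role describes (Redis, Kafka, async queues) all rest on the same producer/consumer pattern. A minimal, dependency-free sketch using Python's asyncio; the `upper()` transform is a hypothetical stand-in for real processing, not anything from the posting:

```python
import asyncio

async def producer(queue, events):
    # Push raw events (e.g. transcript fragments) onto the queue.
    for event in events:
        await queue.put(event)
    await queue.put(None)  # sentinel: signals end of stream

async def consumer(queue, results):
    # Drain the queue, transforming each event before storage or broadcast.
    while True:
        event = await queue.get()
        if event is None:
            break
        results.append(event.upper())  # hypothetical stand-in for real work

async def run_pipeline(events):
    queue = asyncio.Queue(maxsize=100)  # bounded queue applies backpressure
    results = []
    await asyncio.gather(producer(queue, events), consumer(queue, results))
    return results

processed = asyncio.run(run_pipeline(["agent joined", "call started"]))
```

In production the queue would be Kafka or a Redis stream rather than an in-process `asyncio.Queue`, but the backpressure and sentinel-termination ideas carry over.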

Posted 1 week ago

Apply

1.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Description
Job Title: Junior Data Scientist
Location: Bangalore
Reporting to: Senior Manager – Analytics

Purpose of the role
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.

In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of:
- LLM-based frameworks, tools, and technologies
- Cloud-native technologies and solutions
- Microservices-based software architecture and design patterns

As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems.
Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.

Key tasks & accountabilities

Large Language Models (LLM):
- Experience with LangChain, LangGraph
- Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler

Multi-modal Retrieval-Augmented Generation (RAG):
- Expertise in multi-modal AI systems (text, images, audio, video)
- Designing and optimizing chunking strategies and clustering for large data processing

Streaming & Real-time Processing:
- Experience in audio/video streaming and real-time data pipelines
- Low-latency inference and deployment architectures

NL2SQL:
- Natural language-driven SQL generation for databases
- Experience with natural language interfaces to databases and query optimization

API Development:
- Building scalable APIs with FastAPI for AI model serving

Containerization & Orchestration:
- Proficient with Docker for containerized AI services
- Experience with orchestration tools for deploying and managing services

Data Processing & Pipelines:
- Experience with chunking strategies for efficient document processing
- Building data pipelines to handle large-scale data for AI model training and inference

AI Frameworks & Tools:
- Experience with AI/ML frameworks like TensorFlow, PyTorch
- Proficiency in LangChain, LangGraph, and other LLM-related technologies

Prompt Engineering:
- Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting
- Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy
- Strong understanding of context window management and optimizing prompts for performance and efficiency

Qualifications, Experience, Skills

Level of educational attainment required (1 or more of the following):
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Previous Work Experience Required
- Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.

Technical Skills Required
- Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

And above all of this, an undying love for beer! We dream big to create a future with more cheer.
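The chunking strategies this posting emphasizes for RAG come down to splitting documents into overlapping windows before embedding. A minimal character-based sliding-window chunker; real systems usually chunk by tokens or sentences, and the sizes here are arbitrary:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping windows so context survives chunk borders."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last window already covers the tail
    return chunks

sample = "".join(chr(97 + i % 26) for i in range(500))  # 500-char dummy text
chunks = chunk_text(sample)
```

The overlap means each retrieval hit carries some of its neighbors' context, at the cost of indexing slightly more text.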

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview
At BlueKaktus, we're leveraging cutting-edge cloud technology to transform the $3 trillion Fashion & Lifestyle industry. Own, architect, and deliver core modules of our multi-agentic AI SaaS platform, spanning user-facing React micro-front-ends, Python microservices, Postgres persistence, and AWS infrastructure, while mentoring the next wave of engineers in a hyper-growth environment.

Key Responsibilities
- End-to-end ownership: translate product vision into secure, scalable features; drive design, coding, review, testing, and deployment.
- Platform evolution: design fault-tolerant, multi-tenant architectures; weave in multi-agent LLM workflows, vector search, and RAG pipelines.
- Dev excellence: champion CI/CD, IaC (Terraform/CDK), automated testing, observability, and cost-aware cloud operations.
- Technical leadership: mentor 2-4 engineers, set coding standards, lead architecture reviews and sprint planning.
- Cross-functional collaboration: pair with Product, DevRel, and GTM to ship business-impacting releases every 2-3 weeks.

Must-Have Skills
- 4-6 years building production SaaS; 3 years in Python back ends (FastAPI/Django/Flask) and React (hooks, TS).
- Deep SQL & Postgres tuning; distributed-systems know-how (caching, queues, event-driven design).
- Hands-on AWS (EKS/Lambda, S3, Aurora, IAM) and containerisation (Docker, Kubernetes).
- Proven track record of shipping at >1M MAU or >10K TPS scale.
- Strong DSA, design patterns, code review, and mentoring chops.

Nice-to-Haves
- LangChain / agents / vector DBs; OpenAI, Anthropic, or Llama APIs.
- Experience with feature-flag systems, multi-region deployments, SOC 2 / ISO 27001 compliance.

Apply now at recruitment@bluekaktus.com and join us in transforming fashion with technology!
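The caching side of the "caching, queues, event-driven design" requirement can be pictured as a read-through cache with time-based expiry. A toy sketch; `TTLCache` and its API are invented for illustration, not a specific library:

```python
import time

class TTLCache:
    """Read-through cache: serve fresh entries, reload expired ones."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        value, expiry = self._store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value          # cache hit: skip the backing store
        value = loader(key)       # miss or stale: go back to the source
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def expensive_lookup(key):
    calls.append(key)  # track how often the backing store is hit
    return key.upper()

cache = TTLCache(ttl_seconds=60)
first = cache.get("tenant-42", expensive_lookup)
second = cache.get("tenant-42", expensive_lookup)
```

In a multi-tenant SaaS deployment the same pattern typically sits in front of Redis or Postgres, with the tenant id folded into the cache key.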

Posted 1 week ago

Apply

5.0 years

5 - 9 Lacs

Calicut

On-site

We are excited to share a fantastic opportunity for the AI Lead / Sr. AI-ML Engineer position at Gritstone Technologies. We believe your skills and experience could be a perfect match for this role, and we would love for you to explore this opportunity with us.

Responsibilities:
- Design and implement scalable, high-performance AI/ML architectures in Python tailored for real-time and batch processing use cases.
- Lead the development of robust, end-to-end AI pipelines, including advanced data preprocessing, feature engineering, model development, and deployment.
- Define and drive the integration of AI solutions across cloud-native platforms (AWS, Azure, GCP) with optimized cost-performance trade-offs.
- Architect and deploy multimodal AI systems, leveraging advanced NLP (e.g., LLMs, OpenAI-based customizations, scanned-invoice data extraction), computer vision (e.g., inpainting, super-resolution scaling, video-based avatar generation), and generative AI technologies (e.g., video and audio generation).
- Integrate domain-specific AI solutions, such as reinforcement learning and self-supervised learning models.
- Implement distributed training and inferencing pipelines using state-of-the-art frameworks.
- Drive model optimization through quantization, pruning, sparsity techniques, and mixed-precision training to maximize performance across GPU hardware.
- Develop scalable solutions using large vision-language models (VLMs) and large language models (LLMs).
- Define and implement MLOps practices for version control, CI/CD pipelines, and automated model deployment using tools like Kubernetes, Docker, Kubeflow, and FastAPI.
- Enable seamless integration of databases (SQL Server, MongoDB, NoSQL) with AI workflows.
- Drive cutting-edge research in AI/ML, including advancements in RLHF, retrieval-augmented generation (RAG), and multimodal knowledge graphs.
- Experiment with emerging generative technologies, such as diffusion models for video generation and neural audio synthesis.
- Collaborate with cross-functional stakeholders to deliver AI-driven business solutions aligned with organizational goals.

5+ years of experience required.
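The quantization technique listed among the model optimizations has simple arithmetic at its core: map floats onto small integers with a shared scale. A toy symmetric int8 sketch; real pipelines use per-channel scales and framework tooling rather than plain Python lists:

```python
def quantize_int8(weights):
    """Map floats to integers in [-127, 127] using one shared scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate floats; rounding error is bounded by scale / 2."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
```

Storing `quantized` plus a single `scale` costs roughly a quarter of the float32 footprint, which is the whole point of the technique.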

Posted 1 week ago

Apply

6.0 - 9.0 years

5 - 8 Lacs

Gurgaon

On-site

Description
The Role:
- Integrate with internal/external LLM APIs (e.g., OpenAI, Azure OpenAI), including prompt engineering and pre/post-processing as required.
- Build and maintain data analysis workflows using Pandas for data transformation and insight delivery.
- Develop RESTful APIs using FastAPI or Flask for data and document management.
- Design and implement clean, efficient, and modular Python codebases for backend services, data pipelines, and document processing workflows.
- Support the team in onboarding new data sources, integrating with Azure services, and ensuring smooth cloud deployments.
- Collaborate with product, data science, and engineering teams to translate business requirements into technical solutions.
- Write unit tests and contribute to CI/CD pipelines for robust, production-ready code.
- Stay up to date with advances in Python, LLM, and cloud technologies.

Qualifications:
- Bachelor's or master's in Computer Science, Engineering, or a related quantitative discipline.

Experience:
- 6 to 9 years of hands-on experience in data engineering or backend development with Python.

Technical Competencies:
- Exposure to LLM integration (prompt design, API integration, handling text data).
- Strong experience in Python with a focus on data analysis (Pandas) and scripting.
- Hands-on experience in building REST APIs (FastAPI or Flask).
- Experience in developing data pipelines, data cleaning, and transformation.
- Working knowledge of Azure cloud services (Azure Functions, Blob Storage, App Service, etc.).
- (Nice to have) Experience integrating MongoDB with Python for data storage, modelling, or reporting.
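The prompt engineering with pre/post-processing this role describes usually looks like a pair of small pure functions around the LLM call. A hedged sketch; the template wording and the "Answer:" echo handling are illustrative conventions, not a prescribed format:

```python
def build_prompt(question, context_rows):
    """Pre-processing: fold retrieved context into a grounded prompt."""
    context = "\n".join(f"- {row}" for row in context_rows)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

def postprocess(raw_response):
    """Post-processing: trim whitespace and strip a leading 'Answer:' echo."""
    text = raw_response.strip()
    if text.lower().startswith("answer:"):
        text = text[len("answer:"):].strip()
    return text

prompt = build_prompt("Total revenue?", ["Q1 revenue: 10", "Q2 revenue: 12"])
cleaned = postprocess("  Answer: 22  ")
```

Keeping both steps as plain functions makes them unit-testable independently of whichever LLM API (OpenAI, Azure OpenAI) sits in the middle.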

Posted 1 week ago

Apply

0.0 years

2 - 9 Lacs

Gurgaon

On-site

About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
Gartner is seeking a talented and passionate MLOps Engineer to join our growing team. In this role, you will be responsible for building Python and Spark-based ML solutions that ensure the reliability and efficiency of our machine learning systems in production. You will collaborate closely with data scientists to operationalize existing models and optimize our ML workflows. Your expertise in Python, Spark, model inferencing, and AWS services will be crucial in driving our data-driven initiatives.

What you’ll do:
- Develop ML inferencing and data pipelines with AWS tools (S3, EMR, Glue, Athena).
- Python API development using frameworks like FastAPI and Django.
- Deploy and optimize ML models on SageMaker and EKS.
- Implement IaC with Terraform and CI/CD for seamless deployments.
- Ensure quality, scalability, and performance of APIs.
- Collaborate with product managers, data scientists, and other engineers for smooth operations.
- Communicate technical insights clearly and support production troubleshooting when needed.

What you’ll need:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Must have:
- 0-2 years of experience building data and MLOps pipelines using Python and Spark.
- Strong proficiency in Python; exposure to Spark is good to have.
- Hands-on RESTful development using Python frameworks like FastAPI and Django.
- Experience with Docker and Kubernetes (EKS) or SageMaker.
- Experience with CloudFormation or Terraform for deploying and managing AWS resources.
- Strong problem-solving and analytical skills.
- Ability to work effectively within an agile environment.

Who you are:
- Bachelor’s degree or foreign equivalent degree in Computer Science or a related field required.
- Excellent communication and prioritization skills.
- Able to work independently or within a team proactively in a fast-paced Agile-Scrum environment.
- Owns success: takes responsibility for successful delivery of the solutions.
- Strong desire to improve their skills in software testing and technologies.

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-AJ4

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101728

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
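The ML inferencing pipelines this Gartner role describes typically score records in fixed-size batches for throughput. A framework-free sketch of that shape; the doubling `predict_batch` is a placeholder for a real SageMaker or in-process model call:

```python
def batched(items, batch_size):
    """Yield fixed-size batches; the final batch may be shorter."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_inference(records, predict_batch, batch_size=32):
    """Apply a batch-scoring function over all records, preserving order."""
    scores = []
    for batch in batched(records, batch_size):
        scores.extend(predict_batch(batch))
    return scores

scores = run_inference(list(range(10)), lambda batch: [x * 2 for x in batch], batch_size=4)
```

Batching amortizes per-call overhead (network round trips, GPU kernel launches) while `batched` keeps memory bounded for large inputs.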

Posted 1 week ago

Apply

5.0 - 7.0 years

6 - 8 Lacs

Chennai

On-site

Senior Consultant - Data Scientist
Date: Jul 25, 2025
Location: Chennai, IN
Requisition ID: 14612

Description:
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Position Summary:
As a Data Scientist at FSL, you will leverage your expertise in Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, and Generative AI to develop innovative data-driven solutions and applications. You will play a key role in designing and deploying dynamic models and applications using modern web frameworks like Flask and FastAPI, ensuring efficient deployment and ongoing monitoring of these systems.

Job Title: Sr. Consultant - Data Scientist

Key Responsibilities:
- Model Development and Application: Design and implement advanced ML and DL models. Develop web applications for model deployment using Flask and FastAPI to enable real-time data processing and user interaction.
- Data Analysis: Perform exploratory data analysis to understand underlying patterns, correlations, and trends. Develop comprehensive data processing pipelines to prepare large datasets for analysis and modeling.
- Generative AI: Employ Generative AI techniques to create new data points, enhance content generation, and innovate within the field of synthetic data production.
- Collaborative Development: Work with cross-functional teams to integrate AI capabilities into products and systems. Ensure that all AI solutions are aligned with business goals and user needs.
- Research and Innovation: Stay updated with the latest developments in AI, ML, DL, CV, and NLP. Explore new technologies and methodologies that can positively impact our products and services.
- Communication: Effectively communicate complex quantitative analysis in a clear, precise, and actionable manner to senior management and other departments.

Required Skills and Qualifications:
- Education: BE, Master's, or PhD in Computer Science, Data Science, Statistics, or a related field.
- Experience: 5 to 7 years of relevant experience in a data science role with a strong focus on ML, DL, and statistical modeling.
- Technical Skills: Strong coding skills in Python, including experience with Flask or FastAPI. Proficiency in ML/DL frameworks (e.g., PyTorch, TensorFlow), CV (e.g., OpenCV), and NLP libraries (e.g., NLTK, spaCy).
- Generative AI: Experience with generative models such as GANs, VAEs, or Transformers.
- Deployment Skills: Experience with Docker, Kubernetes, and continuous integration/continuous deployment (CI/CD) pipelines.
- Strong Analytical Skills: Ability to translate complex data into actionable insights.
- Communication: Excellent written and verbal communication skills.
- Certifications: Certifications in Data Science, ML, or AI from recognized institutions are an added advantage.

Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
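The data processing pipelines this NLP-heavy role calls for usually begin with a normalization step like the one sketched below. The stopword list is a tiny illustrative sample, not a real lexicon:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "of", "on"}  # tiny illustrative list

def preprocess(text):
    """Lowercase, tokenize on alphanumeric runs, and drop stopwords."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("The cat is on a mat!")
```

Libraries such as NLTK and spaCy (both named in the posting) ship richer tokenizers and stopword lists, but the pipeline shape is the same.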

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Do
As an AI Engineer at Wednesday, you’ll design and build production-ready AI systems using state-of-the-art language models, vector databases, and modern AI frameworks. You’ll own the full lifecycle of AI features — from prototyping and prompt engineering to deployment, monitoring, and optimization. You’ll work closely with product and engineering teams to ensure our AI solutions deliver real business value at scale.

Your Responsibilities
- System Architecture & Development: Architect and implement AI applications leveraging transformer-based LLMs, embeddings, and vector similarity techniques. Build modular, maintainable codebases using Python and AI frameworks.
- Retrieval-Augmented Generation & Semantic Search: Design and deploy RAG systems with vector databases such as Pinecone, Weaviate, or Chroma to power intelligent document search and knowledge retrieval.
- LLM Integration & Optimization: Integrate with LLM platforms (OpenAI, Anthropic) or self-hosted models (Llama, Mistral), including prompt engineering, fine-tuning, and model evaluation. Experience with AI orchestration tools (LangFlow, Flowise), multimodal models, or AI safety and evaluation frameworks.
- AI Infrastructure & Observability: Develop scalable AI pipelines with proper monitoring, evaluation metrics, and observability to ensure reliability in production environments.
- End-to-End Integration & Rapid Prototyping: Connect the AI backend to user-facing applications; prototype new AI features quickly using frontend frameworks (React/Next.js).
- Cross-Functional Collaboration: Partner with product managers, designers, and fellow engineers to translate complex business requirements into robust AI solutions.

Requirements
You:
- Have 3–5 years of experience building production AI/ML systems at a consulting or product-engineering firm.
- Possess a deep understanding of transformer architectures, vector embeddings, and semantic search.
- Are hands-on with vector databases (Pinecone, Weaviate, Chroma) and RAG pipelines.
- Have integrated and optimized LLMs via APIs or local deployment.
- Are proficient in Python AI stacks (LangChain, LlamaIndex, Hugging Face).
- Have built backend services (FastAPI, Node.js, or Go) to power AI features.
- Understand AI UX patterns (chat interfaces, streaming responses, loading states, error handling).
- Can deploy and orchestrate AI systems on AWS, GCP, or Azure with containerization and orchestration tools.
- Bonus: advanced React/Next.js skills for prototyping AI-driven UIs.

Benefits
- Creative Freedom: a culture that empowers you to innovate and take bold product decisions for client projects.
- Comprehensive Healthcare: extensive health coverage for you and your family.
- Tailored Growth Plans: personalized professional development programs to help you achieve your career aspirations.
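The retrieval step at the heart of the RAG systems above reduces to nearest-neighbor search over embeddings. A dependency-free sketch with toy 2-d vectors; production systems use a vector database (Pinecone, Weaviate, Chroma) and learned embeddings instead:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=2):
    """Rank (doc_id, vector) pairs by similarity to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

index = [("refunds", [1.0, 0.0]), ("shipping", [0.0, 1.0]), ("returns", [0.9, 0.1])]
hits = retrieve([1.0, 0.0], index, top_k=2)
```

The retrieved document ids would then be resolved to text chunks and folded into the LLM prompt, which is the "augmented" part of RAG.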

Posted 1 week ago

Apply

10.0 years

8 - 10 Lacs

Gāndhīnagar

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Data, Analytics & Insights Technology (DAIT) provides customer, client, and operational data in support of Consumer, Business, Wealth, and Payments Technology with responsibility for a number of key data technologies.
These include 16 Authorized Data Sources (ADS), marketing and insights platforms, advanced analytics platforms, core client data, and more. DAIT drives these capabilities with the goal of maximizing data assets to serve bank operations, meet regulatory requirements and personalize interactions with our customers across all channels.

GBDART, a sub-function of DAIT, is the Bank’s strategic initiative to modernize data architecture and enable cloud-based, connected data experiences for analytics and insights across commercial banking. It delivers real-time operational integration, a comprehensive data management and regulatory framework, and technology solutions supporting key programs such as 14Q, CECL/CCAR/IFRS9, Flood, Climate Risk, and more. GBDART provides vision and oversight for data digitization, quality excellence using AI/ML/NLP, and process simplification, ensuring a single version of truth for enterprise risk and controls through Authorized Data Sources across all major lines of business.

Job Description*
This role provides leadership, technical direction, and oversight to a team delivering technology solutions. Key responsibilities include overseeing the design, implementation, and maintenance of complex applications, aligning technical solutions with business objectives, and ensuring coding practices and quality comply with software development standards. The position requires managing multiple software implementations and demonstrating expertise across several technical competencies.

Responsibilities*
- 10+ years of team leadership experience with a strategic mindset, preferably in Agile/Scrum environments.
- Strong hands-on development experience in React, JavaScript, HTML, and CSS.
- Experience in enterprise-level architecture and solution-based development.
- Architectural design and design thinking skills (desirable).
- Experience with OpenShift containers: creating, deploying images, configuring services/routes, persistent volumes, and secret management.
- Configuring reverse proxies, whitelisting services, forwarding headers, and managing CORS.
- Experience with OAuth or SSO.
- Python FastAPI development and database proficiency.
- Stakeholder management, primarily with US-based leadership and teams.

Requirements*

Education*
B.E. / B.Tech / M.E. / M.Tech / MCA.
Certifications (preferred): React, Python with SQL.

Experience Range*
6 to 12 years.

Foundational Skills*
- In-depth knowledge of the Systems Development Life Cycle (SDLC).
- Proficient in Windows and Linux systems.
- Knowledge of database systems (MySQL or any RDBMS).
- Systems engineering and deployment experience.
- Strong problem-solving skills with the ability to minimize risk and negative impact.
- Ability to work independently with minimal oversight.
- Motivated and eager to learn.
- Broad knowledge of information security principles, including identity, access, and authorization.
- Strong analytical and conceptual thinking skills.

Desired Skills*
- Effective communication across a wide range of technical audiences.
- Comfortable with CI/CD processes and tools (e.g., Ansible, Jenkins, JIRA).

Work Timings*
11:30 AM - 8:30 PM (IST).

Job Location*
Chennai, GIFT.
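The CORS management listed under Responsibilities comes down to conditionally attaching response headers for whitelisted origins. A minimal framework-agnostic sketch; the function name and data shapes are invented for illustration:

```python
def apply_cors(response_headers, origin, allowed_origins):
    """Attach CORS headers only when the request origin is whitelisted."""
    headers = dict(response_headers)  # do not mutate the caller's headers
    if origin in allowed_origins:
        headers["Access-Control-Allow-Origin"] = origin
        headers["Vary"] = "Origin"  # caches must key responses on the origin
    return headers

allowed = {"https://app.example.com"}
ok = apply_cors({"Content-Type": "application/json"}, "https://app.example.com", allowed)
blocked = apply_cors({}, "https://evil.example.net", allowed)
```

In practice frameworks like FastAPI provide middleware for this, and a reverse proxy can apply the same policy in front of the app; the whitelisting logic is identical either way.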

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Noida

On-site

Lead Assistant Manager
EXL/LAM/1433397 | Digital Emerging | Noida
Posted On: 25 Jul 2025
End Date: 08 Sep 2025
Required Experience: 6 - 10 Years

Basic Section
Number Of Positions: 1
Band: B2
Band Name: Lead Assistant Manager
Cost Code: G090622
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: Backfill
Max CTC: 1000000.0000 - 1500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational
Group: EXL Digital
Sub Group: Emerging
Business Unit Organization: Digital Emerging
LOB: Digital Delivery Practice
SBU: Digital Finance Suite
Country: India
City: Noida
Center: Noida - Centre 59

Skills
Skill: AI ML
Minimum Qualification: B.TECH/B.E, MCA, BCA
Certification: No data available

Job Description
Job Summary:
We are seeking a highly experienced AI/ML Engineer with 7–10 years of hands-on experience in building intelligent systems using Generative AI, Agentic AI, RAG (Retrieval-Augmented Generation), NLP, and Deep Learning. The ideal candidate will play a critical role in designing, developing, and deploying cutting-edge AI applications using LangChain, FastAPI, and other modern AI frameworks, with a strong programming foundation in Python and SQL.

Key Responsibilities:
- Design and develop scalable Agentic AI and Generative AI-based applications and pipelines.
- Implement Retrieval-Augmented Generation (RAG) architectures to enhance LLM performance with dynamic knowledge integration.
- Fine-tune and deploy NLP and deep learning models to solve real-world business problems.
- Build autonomous agents that can execute goal-oriented tasks independently.
- Develop robust APIs using FastAPI and integrate with LangChain workflows.
- Work in a Linux environment for model development and production deployment.
- Collaborate with cross-functional teams to drive ML product delivery end-to-end.
- Write optimized code in Python; manage datasets and queries using SQL.
- Keep pace with rapid advancements in the AI/ML space and propose innovations.
Required Skills & Qualifications:
* 7–10 years of experience in AI/ML system design and deployment.
* Strong expertise in Agentic AI, Generative AI, RAG, NLP, and Deep Learning.
* Solid programming skills in Python and SQL.
* Proficiency with LangChain, FastAPI, and working in Linux environments.
* Experience building and scaling ML pipelines and deploying models into production.
* Knowledge of vector databases, prompt engineering, and LLM orchestration is a plus.
* Strong analytical and problem-solving skills, with the ability to work in Agile teams.

Workflow Type: Digital Solution Center

Posted 1 week ago

Apply

0.0 - 1.0 years

1 - 1 Lacs

Hyderabad

Work from Office

Responsibilities: * Develop scalable web apps with Django Rest Framework & FastAPI * Implement REST APIs using Python & AWS Lambda * Collaborate on CI/CD pipelines with a DevOps mindset
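The "REST APIs using Python & AWS Lambda" bullet typically means a handler function receiving an API Gateway proxy event. A minimal sketch, assuming the standard proxy event shape (the route and payload here are hypothetical):

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for a REST endpoint.

    `event` is assumed to follow the API Gateway proxy integration
    shape; only the fields used below matter for this sketch.
    """
    method = event.get("httpMethod", "GET")
    if method == "GET":
        body = {"status": "ok", "path": event.get("path", "/")}
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }
    # Anything but GET is rejected in this sketch.
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}

resp = lambda_handler({"httpMethod": "GET", "path": "/health"}, None)
print(resp["statusCode"])
```

Frameworks like FastAPI can also be run inside Lambda via an ASGI adapter, but the raw handler above is the lowest common denominator.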

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Schaeffler is a dynamic global technology company, and its success has been a result of its entrepreneurial spirit and long history of private ownership. Does that sound interesting to you? As a partner to all of the major automobile manufacturers, as well as key players in the aerospace and industrial sectors, we offer you many development opportunities.

Your Key Responsibilities
* Conception, development, test automation, and operation of Generative AI applications such as chatbots, as well as internal sample and template repositories
* Conception, implementation, testing, and monitoring of Generative AI processes, e.g. RAG pipelines, using the latest cloud technologies and Python
* Working in interdisciplinary teams using agile methods, as well as supporting internal product teams
* Taking on technical responsibility for applications throughout their life cycle as part of a DevOps team

Your Qualifications
* University degree in (Applied) Computer Science/Software Engineering, or a comparable qualification
* Professional experience as a Python developer on Generative AI projects
* Proficiency in building scalable REST APIs using Python, preferably with common frameworks such as FastAPI and SQLAlchemy
* Familiarity with data processing and transformation techniques to support AI models and workflows, e.g. SQL, ETL, RAG
* Know-how in working with cloud services (ideally Azure and AWS) is a plus
* Ability to learn new technologies and skills (T-shape philosophy)
* Excellent customer service skills and the ability to interact with colleagues across the organization, shaped by different working cultures (primarily German)
* Experience in agile environments (i.e. flat hierarchies in daily work), combined with a profound agile mindset and corresponding experience in daily work
* Fluent (i.e. B2 or higher) in English, spoken and written

As a global company with employees around the world, it is important to us that we treat each other with respect and value all ideas and perspectives. By appreciating our differences, we inspire creativity and drive innovation. In this way, we contribute to sustainable value creation for our stakeholders and society as a whole. Together, we advance how the world moves. Exciting assignments and outstanding development opportunities await you because we impact the future with innovation. We look forward to your application. www.schaeffler.com/careers Your Contact Schaeffler Technology Solutions India Pvt. Ltd. Vineet Panvelkar For technical questions, please contact this email address: technical-recruiting-support-AP@schaeffler.com Keywords: Experienced; Engineer; Full-Time; Unlimited; Digitalization & Information Technology;
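A recurring building block in the RAG pipelines this posting mentions is document chunking: splitting source text into overlapping windows before embedding, so sentences that straddle a boundary remain retrievable from either neighbouring chunk. A stdlib-only sketch with illustrative sizes:

```python
def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows for a RAG pipeline.

    `size` and `overlap` are in words; real pipelines often chunk by
    tokens instead, but the windowing logic is the same.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # last window already covers the tail
    return chunks

doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc, size=50, overlap=10)
print(len(chunks))
```

Each chunk would then be embedded and stored; at query time the retriever fetches the best-matching chunks rather than whole documents.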

Posted 1 week ago

Apply

5.0 years

30 - 32 Lacs

Greater Hyderabad Area

On-site

Experience: 5+ years
Salary: INR 3,000,000 - 3,200,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Hyderabad)
Placement Type: Full-time permanent position (payroll and compliance to be managed by InfraCloud Technologies Pvt Ltd)
(*Note: This is a requirement for one of Uplers' clients - IF)

What do you need for this opportunity?
Must-have skills: Banking, Fintech, Product Engineering background, Python, FastAPI, Django, Machine Learning (ML)

IF is looking for: Product Engineer
Location: Narsingi, Hyderabad (5 days of work from the office)
The client is a payment gateway processing company.
Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We then share the profile with the client, who conducts one or two interviews.

About The Project
We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus.

Role And Responsibilities
* Design, build, and optimize components of the ML engineering pipeline for scalability and performance.
* Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models.
* Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow.
* Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure.
* Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations.
* Continuously improve model deployment pipelines for reliability, monitoring, and automation.
Requirements
* 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization.
* Solid knowledge of ML engineering principles and deployment best practices.
* Experience with Feast, Kubeflow, MLflow, or similar tools.
* Deep understanding of NumPy, Pandas, and data processing workflows.
* Exposure to big data environments and a good grasp of data science model workflows.
* Strong analytical and problem-solving skills with attention to detail.
* Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration.
* Excellent communication and collaboration skills.

Nice to Have
* Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure).
* Familiarity with containerization technologies like Docker and orchestration using Kubernetes.
* Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks.
* Understanding of data protection, governance, or security aspects in ML pipelines.

Experience Required: 5+ years

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
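One of the "data transformation" steps such an ML pipeline performs is standardising numeric features before training. A stdlib-only sketch of z-score scaling (a real pipeline would do this with NumPy/Pandas or read precomputed statistics from a feature store like Feast; the column names are illustrative):

```python
import statistics

def standardize(rows: list[dict], cols: list[str]) -> list[dict]:
    """Z-score the named numeric columns across all rows.

    Mirrors what a pandas `(df - df.mean()) / df.std()` step does,
    computed in pure Python so the sketch runs anywhere.
    """
    stats = {}
    for c in cols:
        values = [r[c] for r in rows]
        # Sample standard deviation, matching pandas' default ddof=1.
        stats[c] = (statistics.mean(values), statistics.stdev(values))
    out = []
    for r in rows:
        scaled = dict(r)  # leave untouched columns as-is
        for c in cols:
            mean, sd = stats[c]
            scaled[c] = (r[c] - mean) / sd
        out.append(scaled)
    return out

rows = [{"amount": 10.0}, {"amount": 20.0}, {"amount": 30.0}]
print(standardize(rows, ["amount"]))
```

The key production concern is computing `stats` once on training data and reusing the same values at inference time, which is exactly the consistency problem feature stores exist to solve.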

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

General Summary: The Senior AI Engineer (2–5 years' experience) is responsible for designing and implementing intelligent, scalable AI solutions with a focus on Retrieval-Augmented Generation (RAG), Agentic AI, and Modular Cognitive Processes (MCP). This role is ideal for individuals who are passionate about the latest AI advancements and eager to apply them in real-world applications. The engineer will collaborate with cross-functional teams to deliver high-quality, production-ready AI systems aligned with business goals and technical standards.

Essential Duties & Responsibilities:
* Design, develop, and deploy AI-driven applications using RAG and Agentic AI frameworks.
* Build and maintain scalable data pipelines and services to support AI workflows.
* Implement RESTful APIs using Python frameworks (e.g., FastAPI, Flask) for AI model integration.
* Collaborate with product and engineering teams to translate business needs into AI solutions.
* Debug and optimize AI systems across the stack to ensure performance and reliability.
* Stay current with emerging AI tools, libraries, and research, and integrate them into projects.
* Contribute to the development of internal AI standards, reusable components, and best practices.
* Apply MCP principles to design modular, intelligent agents capable of autonomous decision-making.
* Work with vector databases, embeddings, and LLMs (e.g., GPT-4, Claude, Mistral) for intelligent retrieval and reasoning.
* Participate in code reviews, testing, and validation of AI components using frameworks like pytest or unittest.
* Document technical designs, workflows, and research findings for internal knowledge sharing.
* Adapt quickly to evolving technologies and business requirements in a fast-paced environment.

Knowledge, Skills, and/or Abilities Required:
* 2–5 years of experience in AI/ML engineering, with at least 2 years in RAG and Agentic AI.
* Strong Python programming skills with a solid foundation in OOP and software engineering principles.
* Hands-on experience with AI frameworks such as LangChain, LlamaIndex, Haystack, or Hugging Face.
* Familiarity with MCP (Modular Cognitive Processes) and their application in agent-based systems.
* Experience with REST API development and deployment.
* Proficiency in CI/CD tools and workflows (e.g., Git, Docker, Jenkins, Airflow).
* Exposure to cloud platforms (AWS, Azure, or GCP) and services like S3, SageMaker, or Vertex AI.
* Understanding of vector databases (e.g., OpenSearch, Pinecone, Weaviate) and embedding techniques.
* Strong problem-solving skills and the ability to work independently or in a team.
* Interest in exploring and implementing cutting-edge AI tools and technologies.
* Experience with SQL/NoSQL databases and data manipulation.
* Ability to communicate technical concepts clearly to both technical and non-technical audiences.

Educational/Vocational/Previous Experience Recommendations: Bachelor's or Master's degree in a related field; 2+ years of relevant experience.

Working Conditions: Hybrid - Pune Location
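At its core, the "modular, intelligent agents capable of autonomous decision-making" this posting describes reduce to a loop that routes a task to one of several tools and applies it. A deliberately tiny sketch; the tool registry, routing rule, and tool implementations are all illustrative placeholders (a real agent would let an LLM pick the tool):

```python
from typing import Callable

# Tool registry: the agent selects a tool by name and applies it.
TOOLS: dict[str, Callable[[str], str]] = {
    # Toy-only calculator; eval is restricted here but still unsafe
    # for untrusted input in real systems.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def route(task: str) -> str:
    # Stand-in routing rule: anything containing a digit goes to the
    # calculator. An agentic system would delegate this choice to an LLM.
    return "calculator" if any(ch.isdigit() for ch in task) else "echo"

def run_agent(task: str) -> str:
    # One decision step: choose a tool, apply it, return the result.
    tool = route(task)
    return TOOLS[tool](task)

print(run_agent("2 + 3"))
print(run_agent("hello"))
```

Modular designs like MCP-style agents keep `route`, the tool registry, and any memory as separately swappable components, which is what makes the overall agent testable and extensible.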

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies