10.0 years
15 - 20 Lacs
Jaipur, Rajasthan, India
On-site
We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale.
🎯 Key Responsibilities
🧩 Product Management
• Define product vision, roadmap, and AI use cases aligned with business goals.
• Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features.
• Translate ambiguous problem statements into clear, prioritized product requirements.
⚙️ AI/ML Engineering & Model Development
• Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.).
• Build pipelines for data preprocessing, model training, evaluation, and deployment.
• Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc.
• Contribute to R&D for cutting-edge models in GenAI (text, vision, code, multimodal).
🤖 AI Agents & LLM Tooling
• Design and implement autonomous or semi-autonomous AI agents using tools like AutoGen, LangGraph, CrewAI, etc.
• Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG).
• Continuously monitor, test, and improve LLM behavior, safety, and output quality.
📊 Data Science & Analytics
• Explore and analyze large datasets to generate insights and inform model development.
• Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis.
• Work with structured, unstructured, and multimodal data (text, audio, image, etc.).
🧰 Preferred Tech Stack / Tools
• Languages: Python, SQL, optionally Rust or TypeScript
• Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI
• Platforms: AWS, Azure, GCP, Vertex AI, SageMaker
• MLOps: MLflow, Weights & Biases, DVC, Kubeflow
• Data: Pandas, NumPy, Spark, Airflow, Databricks
• Vector DBs: Pinecone, Weaviate, FAISS
• Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral
• Tools: Git, Docker, Kubernetes, REST, GraphQL
🧑💼 Qualifications
• Bachelor’s, Master’s, or PhD in Computer Science, Data Science, Machine Learning, or a related field.
• 10+ years of experience in core ML, AI, or Data Science roles.
• Proven experience building and shipping AI/ML products.
• Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation.
• Strong product thinking and ability to work closely with both technical and non-technical stakeholders.
• Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision.
🌟 Bonus Skills
• Experience with autonomous agents and multi-agent orchestration.
• Open-source contributions to ML/AI projects.
• Prior startup or high-growth tech company experience.
• Knowledge of reinforcement learning, diffusion models, or multimodal AI.
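The retrieval-augmented generation (RAG) work described above can be sketched minimally. This is an illustrative toy, not a production pipeline: the bag-of-words `embed` function stands in for a learned embedding model, and the in-memory document list stands in for a vector database such as Pinecone or Weaviate.

```python
import math

# Minimal RAG retrieval step: embed documents, embed the query, and return
# the most similar documents to use as grounding context for an LLM.

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy embedding: term-frequency vector over a fixed vocabulary."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], vocab: list[str], k: int = 1) -> list[str]:
    q = embed(query, vocab)
    scored = sorted(docs, key=lambda d: cosine(q, embed(d, vocab)), reverse=True)
    return scored[:k]

docs = ["the cat sat on the mat", "stocks fell sharply today"]
vocab = sorted({w for d in docs for w in d.split()})
print(retrieve("where did the cat sit", docs, vocab))  # ['the cat sat on the mat']
```

In a real system the retrieved documents would then be interpolated into the LLM prompt; only the similarity search is shown here.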
Posted 4 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities:
• Develop and implement machine learning models and algorithms.
• Work closely with project stakeholders to understand requirements and translate them into deliverables.
• Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
• Stay updated with the latest advancements in AI/ML technologies and methodologies.
• Collaborate with cross-functional teams to support various AI/ML initiatives.
Qualifications:
• Bachelor’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
• Strong understanding of machine learning, deep learning, and Generative AI concepts.
Preferred Skills:
• Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP using Python.
• Strong knowledge and experience in Generative AI / LLM-based development.
• Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
• Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure.
• Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
• Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
• Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
• Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.), and expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
• Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
• Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawlers, and ETL.
• Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
• Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
• Experience with version control systems (e.g., Git, CodeCommit).
Good to have Skills:
• Knowledge and experience in building knowledge graphs in production.
• Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
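The text-chunking requirement above (splitting large documents for indexing and retrieval) can be illustrated with a short sketch. The word-based splitter and its parameter values are assumptions for demonstration; production chunkers typically split on sentences or model tokens rather than words.

```python
def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into word-based chunks with overlap, so content cut at a
    chunk boundary still appears intact in the neighboring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

# 250 synthetic "words" -> 3 chunks: words 0-99, 80-179, 160-249
sample = " ".join(f"w{i}" for i in range(250))
chunks = chunk_text(sample, chunk_size=100, overlap=20)
print(len(chunks))  # 3
```

The overlap parameter trades index size for retrieval recall: larger overlaps duplicate more text but reduce the chance a relevant passage is split across two chunks.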
Posted 4 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: AI Agent Platform Engineer
Location: Noida
Department: Product Engineering – RevAi Pro
Role Overview
You will be responsible for designing, developing, and scaling the AI Agent Framework that powers automation-first modules in RevAi Pro, such as Tell Me, Action Center, and intelligent AI agents. This role is critical to shaping the core foundation of how automation, enterprise search, and just-in-time execution work inside our platform.
Key Responsibilities
1. Agent Architecture & Runtime
• Architect and implement the core orchestration engine for AI agents (event-driven/task-based)
• Manage agent lifecycle functions such as spawn, pause, escalate, and terminate
• Enable secure, real-time communication between agents, services, and workflows
• Integrate memory and retrieval systems using vector databases like Pinecone, Weaviate, or Qdrant
2. LLM Integration & Prompt Engineering
• Integrate LLM providers (OpenAI, Azure OpenAI, Anthropic, Mistral, etc.) into agent workflows
• Create modular prompt templates with retry/fallback mechanisms
• Implement chaining logic and dynamic tool use for agents using LangChain or LlamaIndex
• Develop reusable agent types such as Summarizer, Validator, Notifier, Planner, etc.
3. Backend API & Microservices Development
• Develop FastAPI-based microservices for agent orchestration and skill execution
• Create APIs to register agents, execute agent actions, and manage runtime memory
• Implement RBAC, rate limiting, and security protocols for multi-tenant deployments
4. Data Integration & Task Routing
• Build connectors to integrate structured (CRM, SQL) and unstructured data sources (email, docs, transcripts)
• Route incoming data streams to relevant agents based on workflow and business rules
• Support ingestion from tools like Salesforce, HubSpot, Gong, and Zoom
5. DevOps, Monitoring, and Scaling
• Deploy the agent platform using Docker and Kubernetes on Azure
• Implement Redis, Celery, or equivalent async task systems for agent task queues
• Set up observability to monitor agent usage, task success/failure, latency, and hallucination rates
• Create CI/CD pipelines for agent modules and prompt updates
Ideal Candidate Profile
• 2–4 years of experience in backend engineering, ML engineering, or agent orchestration
• Strong command of Python (FastAPI, asyncio, Celery, SQLAlchemy)
• Experience with LangChain, LlamaIndex, Haystack, or other orchestration libraries
• Hands-on with OpenAI, Anthropic, or similar LLM APIs
• Comfortable with vector embeddings and semantic search systems
• Understanding of modern AI agent frameworks like AutoGen, CrewAI, Semantic Planner, or ReAct
• Familiarity with multi-tenant API security and SaaS architecture
• Bonus: Frontend collaboration experience to support UI for agents and dashboards
• Bonus: Familiarity with SaaS platforms in B2B domains like RevOps, CRM, or workflow automation
What You’ll Gain
• Ownership of agent architecture inside a live enterprise-grade AI platform
• Opportunity to shape the future of AI-first business applications
• Collaboration with founders, product leaders, and early enterprise customers
• Competitive salary with potential ESOP
• First-mover engineering credit on one of the most advanced automation stacks in SaaS
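The "modular prompt templates with retry/fallback mechanisms" responsibility can be sketched as follows. All names here (`flaky_provider`, `backup_provider`, the `PROMPT` template) are hypothetical stand-ins for real LLM clients; only the control flow is illustrated, not any RevAi Pro internals.

```python
import string

# A prompt template plus a retry/fallback loop over a list of providers.
# Each provider is a callable that either returns text or raises an error,
# standing in for a wrapped OpenAI/Anthropic/Azure client.

PROMPT = string.Template("Summarize for a $audience audience:\n$text")

def call_with_fallback(providers, prompt: str, retries: int = 2) -> str:
    last_err = None
    for provider in providers:
        for _ in range(retries):
            try:
                return provider(prompt)
            except RuntimeError as err:  # stand-in for API/timeout errors
                last_err = err
    raise RuntimeError(f"all providers failed: {last_err}")

def flaky_provider(prompt: str) -> str:
    raise RuntimeError("rate limited")

def backup_provider(prompt: str) -> str:
    return "summary:" + prompt.splitlines()[-1][:20]

prompt = PROMPT.substitute(audience="executive", text="Q3 revenue grew 12%.")
print(call_with_fallback([flaky_provider, backup_provider], prompt))
```

Real implementations would add backoff between retries and log which provider ultimately served the request; the structure stays the same.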
Posted 5 days ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore North, India | Posted on 07/29/2025
Job Information
Job Type: Full time
Date Opened: 07/29/2025
Project Code: PRJ000
Industry: IT Services
Work Experience: 5-10 years
City: Bangalore North
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001
About Us
Posted 5 days ago
0.0 - 3.0 years
12 - 24 Lacs
Chennai, Tamil Nadu
On-site
We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you’ll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.
Key Responsibilities:
• Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction.
• Design and build Knowledge Graphs by structuring information from unstructured or semi-structured text.
• Apply Prompt Engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios.
• Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks.
• Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from a graph database.
• Collaborate with teams to translate business problems into structured use cases for model development.
• Understand custom ontologies and entity schemas for the corresponding domain.
• Work with graph databases like Neo4j or similar DBs and query using Cypher or SPARQL.
• Evaluate and track performance using both standard metrics and graph-based KPIs.
Required Skills & Qualifications:
• Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and OpenAI APIs.
• Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing.
• Practical experience working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning.
• Hands-on experience with graph database design and knowledge graph generation.
• Proficient in Python API development (Flask/FastAPI) for serving models and utilities.
• Strong background in data preprocessing, text normalization, and annotation frameworks.
• Understanding of LLM orchestration with tools like LangChain or workflow automation.
• Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).
Nice to Have:
• Experience using LLMs for information extraction, summarization, or question answering over knowledge bases.
• Exposure to Graph Embeddings, GNNs, or semantic web technologies (RDF, OWL).
• Experience with cloud-based model deployment (AWS/GCP/Azure).
• Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,400,000.00 per year
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Natural Language Processing (NLP): 3 years (Preferred)
Language: English & Tamil (Preferred)
Location: Chennai, Tamil Nadu (Preferred)
Work Location: In person
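The NER-to-knowledge-graph pipeline described in this posting can be reduced to a toy sketch. The regex "extractor" and the relation names below are purely illustrative assumptions; a real pipeline would obtain triples from an NER + relation-extraction model (spaCy, an LLM) and persist them to a graph database like Neo4j.

```python
import re
from collections import defaultdict

# Turn extracted (subject, relation, object) triples into an in-memory
# knowledge graph that supports simple neighbor queries.

TRIPLE_PATTERN = re.compile(r"(\w+) (works_at|located_in) (\w+)")

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Toy extractor: real systems replace this regex with an ML model."""
    return TRIPLE_PATTERN.findall(text)

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, subj: str, rel: str, obj: str) -> None:
        self.edges[subj].append((rel, obj))

    def neighbors(self, subj: str, rel: str) -> list[str]:
        return [o for r, o in self.edges[subj] if r == rel]

kg = KnowledgeGraph()
for s, r, o in extract_triples("Alice works_at Acme Acme located_in Chennai"):
    kg.add(s, r, o)
print(kg.neighbors("Alice", "works_at"))  # ['Acme']
```

The `neighbors` query corresponds conceptually to a one-hop Cypher pattern like `(a)-[:works_at]->(b)`.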
Posted 5 days ago
4.0 years
0 Lacs
India
Remote
At SpicyChat, we’re on a mission to build the best uncensored roleplaying agent in the world , and we’re looking for a passionate Data Scientist to join our team. Whether you’re early in your data science career or growing into a mid-senior role, this is a unique opportunity to work hands-on with state-of-the-art LLMs in a fast-paced, supportive environment. Role Overview We’re looking for a Data Scientist (Junior to Mid-Senior level) who will support our LLM projects across the full data pipeline—from building clean datasets and dashboards to fine-tuning models and supporting cross-functional collaboration. You’ll work closely with ML engineers, product teams, and data annotation teams to bring AI solutions to life. What You’ll Be Doing ETL and Data Pipeline Development: Design and implement data extraction, transformation, and loading (ETL) pipelines. Work with structured and unstructured data from various sources. Data Preparation: Clean, label, and organize datasets for training and evaluating LLMs. Collaborate with annotation teams to ensure high data quality. Model Fine-Tuning & Evaluation: Support the fine-tuning of LLMs for specific use cases. Assist in model evaluation, prompt engineering, and error analysis. Dashboarding & Reporting: Create and maintain internal dashboards to track data quality, model performance, and annotation progress. Automate reporting workflows to help stakeholders stay informed. Team Coordination & Collaboration: Communicate effectively with ML engineers, product managers, and data annotators. Ensure that data science deliverables align with product and business goals. Research & Learning: Stay current with developments in LLMs, fine-tuning techniques, and the AI ecosystem. Share insights with the team and suggest improvements based on new findings. Qualifications Required: 1–4 years of experience in a data science, ML, or analytics role. Proficient in Python and data science libraries (Pandas, NumPy, scikit-learn). 
Experience with SQL and data visualization tools (e.g., Streamlit, Dash, Tableau, or similar). Familiarity with machine learning workflows and working with large datasets. Strong communication and organizational skills. Bonus Points For: Experience fine-tuning or evaluating large language models (e.g., OpenAI, Hugging Face, LLaMA, Mistral, etc.). Knowledge of prompt engineering or generative AI techniques. Exposure to tools like Weights & Biases, Airflow, or cloud platforms (AWS, GCP, Azure). Previous work with cross-functional or remote teams. Why Join NextDay AI? 🌍 Remote-first: Work from anywhere in the world. ⏰ Flexible hours: Create a schedule that fits your life. 🌴 Unlimited leave: Take the time you need to rest and recharge. 🚀 Hands-on with LLMs: Get practical experience with cutting-edge AI systems. 🤝 Collaborative culture: Join a supportive, ambitious team working on real-world impact. 🌟 Mission-driven: A chance to be part of an exciting mission and an amazing team. Ready to join us in creating the ultimate uncensored roleplaying agent? Send us your resume along with some details on your coolest projects. We’re excited to see what you’ve been working on!
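The ETL responsibilities described in this posting can be illustrated with a minimal extract/transform/load sketch for LLM training data. The JSONL record shape (`prompt`/`response` fields) is an assumption chosen for demonstration, not a description of any particular team's format.

```python
import json
from io import StringIO

# Extract raw JSONL records, transform (trim whitespace, drop rows with
# empty prompts), and load the cleaned rows back into JSONL text.

RAW = StringIO('{"prompt": "  Hi  ", "response": "Hello!"}\n'
               '{"prompt": "", "response": "dropped: empty prompt"}\n')

def extract(fp) -> list[dict]:
    return [json.loads(line) for line in fp if line.strip()]

def transform(records: list[dict]) -> list[dict]:
    cleaned = []
    for r in records:
        prompt = r["prompt"].strip()
        if prompt:  # drop rows with empty prompts
            cleaned.append({"prompt": prompt, "response": r["response"].strip()})
    return cleaned

def load(records: list[dict]) -> str:
    return "\n".join(json.dumps(r) for r in records)

out = load(transform(extract(RAW)))
print(out)  # {"prompt": "Hi", "response": "Hello!"}
```

In practice each stage would read from and write to real storage (S3, a warehouse, an annotation tool), but the three-stage shape is the same.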
Posted 5 days ago
0.0 years
0 Lacs
Bengaluru
Work from Office
Role & responsibilities
• Assist in designing, training, and evaluating machine learning and deep learning models.
• Work on GenAI use cases such as text summarization, question answering, and prompt engineering.
• Build applications using LLMs (like OpenAI GPT, LLaMA, Mistral, Claude, or similar).
• Preprocess and manage large datasets for training and inference.
• Implement NLP pipelines using libraries like Hugging Face Transformers.
• Help integrate AI models into production-ready APIs or applications.
• Stay updated with advancements in GenAI, ML, and LLM frameworks.
Preferred candidate profile
• Knowledge of vector databases (FAISS, Pinecone, etc.)
• LangChain or LlamaIndex (RAG pipelines)
• Experience with Kaggle competitions
• Awareness of ethical AI principles and model limitations
Selection Process:
1. Technical Assessment – Python + ML/GenAI basics
2. Technical Interview – Coding + Project Discussion
3. Final Selection – Based on combined performance
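One of the GenAI use cases listed above, text summarization, can be contrasted with a classical baseline: frequency-based extractive summarization. This toy scorer is not how an LLM summarizes; it is only a reference point against which LLM output is often compared.

```python
import re
from collections import Counter

# Score each sentence by the corpus frequency of its words and keep the
# top-scoring sentences as the "summary".

def summarize(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"\w+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(top)

text = ("Transformers power modern NLP. Transformers use attention. "
        "Lunch was good.")
print(summarize(text))  # Transformers power modern NLP.
```

The repeated word "Transformers" pulls its sentences to the top, which shows both the strength (topicality) and the weakness (no abstraction) of frequency baselines.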
Posted 5 days ago
5.0 years
0 Lacs
Hyderābād
On-site
Job Title: AWS Bedrock Developer Job Description We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent. The Concentrix Technical Products and Services team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients. Hence achieving our vision. Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures. We are seeking a highly skilled AWS Bedrock Developer to design, develop, and deploy generative AI applications using Amazon Bedrock. 
The ideal candidate will have hands-on experience with AWS-native services, prompt engineering, and building intelligent, scalable AI solutions.
Key Responsibilities
• Design and implement generative AI solutions using Amazon Bedrock and foundation models (e.g., Anthropic Claude, Mistral, Meta Llama).
• Develop and optimize prompts for various use cases including chatbots, summarization, content generation, and more.
• Integrate Bedrock with other AWS services such as Lambda, S3, API Gateway, and SageMaker.
• Build and deploy scalable, secure, and cost-effective AI applications.
• Collaborate with data scientists, ML engineers, and product teams to define requirements and deliver solutions.
• Monitor and optimize performance, cost, and reliability of deployed AI services.
• Stay updated with the latest advancements in generative AI and AWS services.
Required Skills & Experience
• 5+ years of experience in cloud development, with at least 1 year working with AWS Bedrock.
• Strong programming skills in Java (Spring Boot).
• Experience with prompt engineering and fine-tuning LLMs.
• Familiarity with AWS services: Lambda, S3, IAM, API Gateway, CloudWatch, etc.
• Understanding of RESTful APIs and microservices architecture.
• Excellent problem-solving and communication skills.
Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion
Language Requirements:
Time Type: Full time
If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents
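A Bedrock invocation of the kind this role describes might be sketched as below. The model ID and the Anthropic messages body shape are assumptions to be verified against current AWS documentation, and a fake client is injected in place of `boto3.client("bedrock-runtime")` so the flow can be exercised without AWS credentials.

```python
import json

# Sketch of calling a foundation model through Bedrock's runtime API.
# In production, `client` would be boto3.client("bedrock-runtime").

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # example model id

def ask_bedrock(client, prompt: str, max_tokens: int = 256) -> str:
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = client.invoke_model(modelId=MODEL_ID, body=body)
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

class FakeBody:
    def read(self):
        return json.dumps({"content": [{"text": "Paris"}]}).encode()

class FakeClient:  # stands in for boto3's bedrock-runtime client
    def invoke_model(self, modelId, body):
        assert "messages" in json.loads(body)
        return {"body": FakeBody()}

print(ask_bedrock(FakeClient(), "Capital of France?"))  # Paris
```

Injecting the client this way also makes the wrapper unit-testable, which matters for the monitoring and reliability responsibilities listed above.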
Posted 5 days ago
1.0 years
2 - 6 Lacs
Ahmedabad
On-site
About the Role
We are looking for an LLM (Large Language Model) Engineer to design, build, and optimize intelligent agents powered by Large Language Models (LLMs). You will work on cutting-edge AI applications, pre-train LLMs, fine-tune open-source models, integrate multi-agent systems, and deploy scalable solutions in production environments.
Key Responsibilities (Must Have)
• Develop and fine-tune LLM-based models and AI agents for automation, reasoning, and decision-making.
• Build multi-agent systems that coordinate tasks efficiently.
• Design prompt engineering, retrieval-augmented generation (RAG), and memory architectures.
• Optimize inference performance and reduce hallucinations in LLMs.
• Integrate LLMs with APIs, databases, and external tools for real-world applications.
• Implement reinforcement learning with human feedback (RLHF) and continual learning strategies.
• Collaborate with research and engineering teams to enhance model capabilities.
Requirements
• 1+ years in AI/ML, with at least 1+ years in LLMs or AI agents.
• Strong experience in Python, LangChain, LlamaIndex, AutoGen, Hugging Face, etc.
• Experience with open-source LLMs (LLaMA, Mistral, Falcon, etc.).
• Hands-on experience in LLM deployments with strong inference capabilities using robust frameworks such as vLLM, and in building multi-modal RAG systems.
• Knowledge of vector databases (FAISS, Chroma) for retrieval-based systems.
• Experience with LLM fine-tuning, downscaling, prompt engineering, and model inference optimization.
• Familiarity with multi-agent systems, cognitive architectures, or autonomous AI workflows.
• Expertise in cloud platforms (AWS, GCP, Azure) and scalable AI deployments.
• Strong problem-solving and debugging skills.
Nice to Have
• Contributions to AI research, GitHub projects, or open-source communities.
• Knowledge of Neural Symbolic AI, AutoGPT, BabyAGI, or similar frameworks.
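The "reduce hallucinations" requirement above can be illustrated with one simple tactic: a grounding check that flags answer terms unsupported by the retrieved context, so ungrounded answers can be retried or rejected. The stopword list and the 0.7 threshold below are arbitrary choices for demonstration, not a production guardrail.

```python
import re

# Grounding check: what fraction of the answer's content words appear in
# the retrieved context? Low overlap suggests a possible hallucination.

STOPWORDS = {"the", "a", "an", "is", "was", "in", "of", "and", "to"}

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"\w+", text.lower()) if w not in STOPWORDS}

def grounded(answer: str, context: str, threshold: float = 0.7) -> bool:
    terms = content_words(answer)
    if not terms:
        return True
    supported = terms & content_words(context)
    return len(supported) / len(terms) >= threshold

context = "The Eiffel Tower is in Paris and was completed in 1889."
print(grounded("The Eiffel Tower was completed in 1889.", context))  # True
print(grounded("The Eiffel Tower was moved to Berlin.", context))    # False
```

Production systems usually replace the lexical overlap with an entailment model or a second LLM judge, but the gate-then-retry structure is the same.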
Job Type: Full-time Pay: ₹23,671.07 - ₹55,229.87 per month Benefits: Paid sick time Provident Fund Work Location: In person
Posted 5 days ago
12.0 years
0 Lacs
India
Remote
We're building a next-generation agentic AI platform and need a talented, self-starting, experienced senior developer. The role will require advanced AI manipulation (including prompt engineering) and model tuning. You will be part of a small multi-disciplinary team, solving problems that have never been solved before.
Skills And Experience
• Excellent command of English reading, writing, and speaking
• Deep experience working with LLMs (OpenAI, Claude, Mistral, with an emphasis on open-source (smaller) models, etc.)
• Advanced prompt engineering across zero-shot, few-shot, and chain-of-thought paradigms
• Experience with Retrieval-Augmented Generation (RAG) pipelines
• Experience with model fine-tuning techniques, including LoRA, QLoRA, adapters, etc.
• Strong understanding of agentic AI architectures, including multi-agent coordination, tool use, and planning
• Experience with Mixture of Experts (MoE) models and how to route, train, and serve them efficiently
• Experience with LLM tooling (LangChain, LlamaIndex, Transformers, etc.)
• Must be able to work UK office hours
Desirable Skills
• Experience with TypeScript
• Experience with vector databases, embeddings, and hybrid search
• Experience with model hosting, inference infrastructure, and cloud-based deployment
• Experience with open-source LLMs and fine-tuning stacks (Axolotl, vLLM, etc.)
Benefits
• “Yoodoo” - you can go offline and spend 2 hours of your work time every week on any activity that serves your mental or physical health
• 25 days paid annual leave per annum
• Additional paid day off for festivals, your birthday, or any celebration
• Paid maternity leave
• Paid paternity leave
• Paid sick leave
• Remote working
• Union, which provides assistance and service to everyone at VDP
• Salary: competitive market rate
About VDP
🌟 Founded 12 years ago in the UK, our company has a worldwide team that offers technology services. 👔 We help various clients, including those without their own developers.
For them, we build entire teams to develop software from the ground up. We also assist clients who already have platforms and development teams but need help speeding up their work or starting new projects. 🛠️ Our skills include a broad range of technologies such as C#, Python, C++, Angular, React, Node.js, and both native and hybrid app development, as well as AI. Since we started, we have always worked remotely, so we don’t have an office, and everyone works from home. 🌲 We are a certified B Corp and have proven we are a force for good for our employees, our customers, our suppliers, our communities, and the environment. ❤️ 10% of what we make goes to our VDP Trust, helping to support projects for women and girls around the world. Through our work, our values, and our actions, we aim to be a force for good, not just for ourselves and our customers, but for our communities and the world at large. We have four values that we all live by: Collaborate, Compassion, Honesty, and Inclusion.
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Apply Now Job Title AWS Bedrock Developer Job Description We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent. The Concentrix Technical Products and Services team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients. Hence achieving our vision. Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures. We are seeking a highly skilled AWS Bedrock Developer to design, develop, and deploy generative AI applications using Amazon Bedrock. 
The ideal candidate will have hands-on experience with AWS-native services, prompt engineering, and building intelligent, scalable AI solutions.
🔧 Key Responsibilities
• Design and implement generative AI solutions using Amazon Bedrock and foundation models (e.g., Anthropic Claude, Mistral, Meta Llama).
• Develop and optimize prompts for various use cases including chatbots, summarization, content generation, and more.
• Integrate Bedrock with other AWS services such as Lambda, S3, API Gateway, and SageMaker.
• Build and deploy scalable, secure, and cost-effective AI applications.
• Collaborate with data scientists, ML engineers, and product teams to define requirements and deliver solutions.
• Monitor and optimize performance, cost, and reliability of deployed AI services.
• Stay updated with the latest advancements in generative AI and AWS services.
🧪 Required Skills & Experience
• 5+ years of experience in cloud development, with at least 1 year working with AWS Bedrock.
• Strong programming skills in Java (Spring Boot).
• Experience with prompt engineering and fine-tuning LLMs.
• Familiarity with AWS services: Lambda, S3, IAM, API Gateway, CloudWatch, etc.
• Understanding of RESTful APIs and microservices architecture.
• Excellent problem-solving and communication skills.
Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion
Language Requirements
Time Type: Full time
If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hi, I’m reaching out regarding a hybrid-work IT Support Engineer - Level 3 opportunity with one of our companies based in Bengaluru, Karnataka. Please let me know if you’re interested in discussing this further. Thank you!
Title: IT Support Engineer - Level 3
Job Type: Full Time Employment
Type of Work: Hybrid Work
Company Location: Bengaluru, Karnataka
Work Hours: 1st Shift & 2nd Shift
Description: The majority of the work involves supporting the customer with production issues during automation of network deployment. The role offers challenging real-time debugging of issues and quick resolution for network automation.
Job Responsibilities
• Actively involved in troubleshooting production defects in the customer environment.
• Escalate to next-level support based on the priority of the issue.
• Understand customer demands and communicate effectively on the nature of the issue and possible service impact.
Competencies Required:
• Overall 5+ years of relevant IT experience in the area of software development and test automation.
• End-to-end working experience in automation development in a cloud-native environment.
• Automation using Python and JavaScript. Awareness of SQL is an additional advantage. Networking knowledge is an added advantage.
• Knowledge of automation using TOSCA and the Mistral engine.
• OpenShift, Docker, Helm, and container technology experience is essential.
• Experience troubleshooting cloud-native applications.
• Prior experience working in a Linux environment is a must.
Posted 5 days ago
4.0 years
0 Lacs
India
Remote
Job Title: Senior Backend & DevOps Engineer (AI-Integrated Products)
Location: Remote
Employment Type: Full Time / Freelance / Part Time
Work Hours: Flexible work timing

About Us: We’re building AI-powered products that seamlessly integrate technology into everyday routines. We're now looking for a Senior Backend & DevOps Engineer who can help us scale mobile apps globally and architect their backend, while owning infrastructure, performance, and AI integrations.

Responsibilities:

Backend Engineering:
- Own and scale the backend architecture (Node.js/Express or similar) for mobile apps
- Build robust, well-documented, and performant APIs
- Implement user management, session handling, usage tracking, and analytics
- Integrate 3rd-party services including OpenAI, Whisper, and other LLMs
- Optimize app-server communication and caching for global scale

DevOps & Infrastructure:
- Maintain and scale AWS/GCP infrastructure (EC2, RDS, S3, CloudFront/CDN, etc.)
- Set up CI/CD pipelines (GitHub Actions preferred) for smooth deployment
- Monitor performance, set up alerts, and handle autoscaling across regions
- Manage cost-effective global infra scaling and ensure low-latency access
- Handle security (IAM, secrets management, HTTPS, CORS policies, etc.)

AI & Model Integration:
- Integrate LLMs like GPT-4, Mistral, Mixtral, and open-source models
- Support fine-tuning, inference pipelines, and embeddings
- Build offline inference support and manage transcription workflows (Whisper, etc.)
- Set up and optimize vector DBs like Qdrant, Weaviate, Pinecone

Requirements:
- 4+ years of backend experience with Node.js, Python, or Go
- 2+ years of DevOps experience with AWS/GCP/Azure, Docker, and CI/CD
- Experience deploying and managing AI/ML pipelines, especially LLMs and Whisper
- Familiarity with vector databases, embeddings, and offline inference
- Strong understanding of performance optimization, scalability, and observability
- Clear communication skills and a proactive mindset

Bonus Skills:
- Experience working on mobile-first apps (React Native backend knowledge is a plus)
- Familiarity with Firebase, Vercel, Railway, or similar platforms
- Knowledge of data privacy, GDPR, and offline sync strategies
- Past work on productivity, journaling, or health/fitness apps
- Experience self-hosting LLMs or optimizing AI pipelines on edge/cloud

Please share (optional):
- A brief intro about you and your experience
- Links to your GitHub/portfolio or relevant projects
- Resume or LinkedIn profile
- Any AI/infra-heavy work you’re particularly proud of

Contact: subham@binaryvlue.com
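One way to sketch the "caching for global scale" item above: a tiny in-process TTL cache in Python. This is illustrative only (names and the 60-second default are assumptions); a production system would usually reach for Redis or a CDN layer instead.

```python
import time


class TTLCache:
    """Tiny in-process TTL cache for expensive upstream calls (e.g. LLM APIs)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

The same get-then-set pattern applies unchanged when the backing store is Redis; only the client calls differ.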
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role you will:
- Design, develop, and maintain scalable backend services using Python and frameworks like Django, Flask, or FastAPI.
- Build responsive and interactive UIs using React.js, Vue.js, or Angular.
- Develop and consume RESTful APIs, and contribute to API contract definitions, including GenAI/OpenAI integration where applicable.
- Collaborate closely with UI/UX designers, product managers, and fellow engineers to translate business requirements into technical solutions.
- Ensure performance, security, and responsiveness of web applications across platforms and devices.
- Write clean, modular, and testable code following industry best practices and participate in code reviews.
- Architect, build, and maintain distributed systems and microservices, ensuring maintainability and scalability.
- Implement and manage CI/CD pipelines using tools such as Docker, Kubernetes (Helm), Jenkins, or Ansible.
- Use observability tools such as Grafana and Prometheus to monitor application performance and troubleshoot production issues.
- Apply RAG (Retrieval-Augmented Generation) techniques, with hands-on experience benchmarking models, selecting the most suitable model for specific use cases, and working with LLM (Large Language Model) agents.

Requirements

To be successful in this role, you should meet the following requirements:
- 5+ years of experience in full-stack development.
- Strong proficiency in Python, with hands-on experience using Django, Flask, or FastAPI.
- Solid front-end development skills in HTML5, CSS3, and JavaScript, with working knowledge of frameworks like React, Vue, or Angular.
- Proven experience designing and implementing RESTful APIs and integrating third-party APIs/services.
- Experience working with Kubernetes, Docker, Jenkins, and Ansible for containerization and deployment.
- Familiarity with both SQL and NoSQL databases, such as PostgreSQL, MySQL, or MongoDB.
- Comfortable with unit testing, debugging, and using logging tools for observability.
- Experience with monitoring tools such as Grafana and Prometheus.
- Proven experience with OpenAI (GPT-4/GPT-3.5), Claude, Gemini, Mistral, or other commercial/open-source LLMs.
- Basic experience in data handling, including managing, processing, and integrating data within full-stack applications to ensure seamless backend and frontend functionality.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
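The RAG experience called for above comes down to ranking document embeddings against a query embedding before handing the top hits to the LLM. A minimal, dependency-free sketch of that retrieval step (assuming embeddings are plain float lists; a production stack would use a vector store such as FAISS or Pinecone):

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document embeddings most similar to the query."""
    scored = [(cosine(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```

The retrieved documents are then stuffed into the prompt as context, which is the "augmented generation" half of RAG.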
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior AI/ML Engineer
Experience: 8+ Years
Location: Bangalore (On-site)

Mandatory Skills:

LLM & Prompt Engineering
- Strong expertise in prompt engineering techniques and strategies
- In-depth understanding of how Large Language Models (LLMs) work, including control over hyperparameters such as temperature, top_p, etc.
- Experience in LLM fine-tuning, prompt optimization, and zero-/few-shot learning

Agentic AI Frameworks
- Hands-on experience with agent-based frameworks such as LangChain, LangGraph, and CrewAI

Retrieval-Augmented Generation (RAG)
- Implementation experience with RAG pipelines
- Experience integrating vector databases with LLMs for contextual augmentation

LLM Evaluation & Observability
- Knowledge of techniques and tools for LLM performance evaluation
- Familiarity with LLM observability platforms and metrics monitoring
- Ability to define and track quality benchmarks like accuracy, coherence, hallucination rate, etc.

Programming & Deployment
- Strong programming skills in Python
- Experience deploying AI/ML models or LLM-based systems on at least one major cloud platform (AWS, GCP, or Azure)

Preferred Skills (Nice to Have)
- Experience working with OpenAI, Anthropic, Cohere, or open-source LLMs (LLaMA, Mistral, Falcon, etc.)
- Knowledge of Docker, Kubernetes, MLflow, or other MLOps tools
- Experience with embedding models and vector databases (e.g., Pinecone, FAISS, Weaviate)
- Familiarity with transformer architecture and fine-tuning techniques

Responsibilities
- Design, build, and optimize LLM-powered applications
- Develop and maintain prompt strategies tailored to business use cases
- Architect and implement agentic AI workflows using modern frameworks
- Build and monitor RAG pipelines for improved information retrieval
- Establish processes for evaluating and monitoring LLM behavior in production
- Collaborate with cross-functional teams including Product, Data, and DevOps
- Ensure scalable and secure deployment of models to production

Soft Skills
- Strong problem-solving and analytical thinking
- Excellent communication and documentation skills
- Passion for staying updated with advancements in GenAI and LLM ecosystems
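As a refresher on the temperature and top_p hyperparameters this posting calls out: temperature rescales the logits before the softmax, and top_p (nucleus sampling) truncates the distribution to the smallest set of tokens whose cumulative probability reaches p. A self-contained sketch (illustrative only, not any particular library's implementation):

```python
import math
import random


def sample(logits, temperature=1.0, top_p=1.0, rng=random):
    """Softmax with temperature, nucleus (top-p) truncation, then sample an index."""
    scaled = [l / max(temperature, 1e-6) for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    # keep the smallest prefix whose cumulative probability reaches top_p
    kept, cum = [], 0.0
    for p, i in probs:
        kept.append((p, i))
        cum += p
        if cum >= top_p:
            break
    # sample from the renormalized truncated distribution
    mass = sum(p for p, _ in kept)
    r = rng.random() * mass
    for p, i in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][1]
```

Low temperature makes the distribution peakier (near-greedy decoding), while a small top_p drops the long tail of unlikely tokens.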
Posted 6 days ago
5.0 years
0 Lacs
India
Remote
About the Role

We are seeking a hands-on AI/ML Engineer with deep expertise in Retrieval-Augmented Generation (RAG) agents, Small Language Model (SLM) fine-tuning, and custom dataset workflows. You'll work closely with our AI research and product teams to build production-grade models, deploy APIs, and enable next-gen AI-powered experiences.

Key Responsibilities
- Design and build RAG-based solutions using vector databases and semantic search.
- Fine-tune open-source SLMs (e.g., Mistral, LLaMA, Phi) on custom datasets.
- Develop robust training and evaluation pipelines with reproducibility.
- Create and expose REST APIs for model inference using FastAPI.
- Build lightweight frontends or internal demos with Streamlit for rapid validation.
- Analyze model performance and iterate quickly on experiments.
- Document processes and contribute to knowledge-sharing within the team.

Must-Have Skills
- 3–5 years of experience in applied ML/AI engineering roles.
- Expert in Python and common AI frameworks (Transformers, PyTorch/TensorFlow).
- Deep understanding of RAG architecture and vector stores (FAISS, Pinecone, Weaviate).
- Experience with fine-tuning transformer models and instruction-tuned SLMs.
- Proficient with FastAPI for backend API deployment and Streamlit for prototyping.
- Knowledge of tokenization, embeddings, training loops, and evaluation metrics.

Nice to Have
- Familiarity with LangChain, the Hugging Face ecosystem, and OpenAI APIs.
- Experience with Docker, GitHub Actions, and cloud model deployment (AWS/GCP/Azure).
- Exposure to experiment tracking tools like MLflow and Weights & Biases.

What We Offer
- Build core tech for next-gen AI products with real-world impact.
- Autonomy and ownership in shaping AI components from research to production.
- Competitive salary, flexible remote work policy, and a growth-driven environment.
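The "evaluation metrics" skill above often starts with something as simple as token-overlap F1, a standard sanity metric for extractive QA and summarization outputs. A minimal sketch (the lower-casing and whitespace tokenization are simplifying assumptions):

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model prediction and a reference answer."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    if not pred or not ref:
        return float(pred == ref)
    # count reference tokens, then consume them as prediction tokens match
    counts = {}
    for t in ref:
        counts[t] = counts.get(t, 0) + 1
    overlap = 0
    for t in pred:
        if counts.get(t, 0) > 0:
            overlap += 1
            counts[t] -= 1
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Averaging this over an evaluation set gives a quick regression signal while iterating on prompts or fine-tunes.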
Posted 6 days ago
6.0 years
0 Lacs
Bangalore North Rural, Karnataka, India
On-site
Position title: Data Science & AI Engineer
Experience: 6–8 years
Budget: 25–28 LPA
Notice period: 1 month to immediate joiner
Skill set: MLOps, GenAI, SLM, LLM, Python, data science, AI engineering

Roles and responsibilities:
- Develop, implement, and optimize machine learning models and AI algorithms to solve complex business problems.
- Design, build, and fine-tune AI models, particularly LLMs and SLMs, using state-of-the-art techniques and architectures.
- Apply advanced techniques in prompt engineering, model fine-tuning, and optimization to tailor models for specific business needs.
- Deploy and manage machine learning models and pipelines on cloud platforms (AWS, GCP, Azure, etc.).
- Work closely with clients to understand their data and AI needs and provide tailored solutions.
- Collaborate with cross-functional teams to integrate AI solutions into broader software architectures.
- Mentor junior team members and provide guidance on best practices in data science and AI development.
- Stay up-to-date with the latest trends and advancements in data science, AI, and cloud technologies.

Requirements:
- 5+ years of experience in data science, machine learning, and AI technologies.
- Proven experience working with cloud platforms such as Google Cloud, Microsoft Azure, or AWS.
- Expertise in programming languages such as Python, R, or Julia, and AI frameworks like TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Tableau).
- Solid understanding of data engineering concepts including ETL, data pipelines, and databases (SQL, NoSQL).
- Experience with MLOps practices and deployment of models in production environments.
- Familiarity with NLP (Natural Language Processing) tasks and working with large-scale datasets.
- Hands-on experience with generative AI models like GPT, Gemini, Claude, Mistral, etc.
- Client-facing experience with strong communication skills to manage and engage stakeholders.
- Strong problem-solving skills and an analytical mindset.
- Ability to work independently and as part of a team, and to mentor and provide technical leadership to junior team members.
Posted 6 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior AIML Engineer
Relevant Experience: 8+ years
Location: Bangalore/Pune
Employment Type: Full-time

Role Overview:
We are looking for a senior Generative AI Engineer who will be responsible for designing, developing, and implementing advanced AI models and solutions, with a particular focus on generative AI technologies such as large language models (LLMs), NLP, and computer vision. This role involves collaborating with cross-functional teams to translate business needs into innovative AI-driven products and services, driving AI initiatives, and ensuring seamless integration and deployment of AI systems.

Key Responsibilities:
- Collaborate with executives, product managers, and stakeholders to prioritize AI initiatives.
- Lead the design, development, and deployment of generative AI models and systems.
- Make architectural decisions and ensure scalability, efficiency, and robustness of AI solutions.
- Manage the full AI project lifecycle, from research and prototyping to production deployment.
- Mentor and guide junior engineers and contribute to team skill development.
- Drive innovation by incorporating cutting-edge AI research and best practices.

Qualifications:
- Minimum 8 years of experience in data science, machine learning, and generative AI.
- Strong foundation in machine learning and deep learning, with expertise in neural networks, optimization techniques, and model evaluation.
- Has led projects with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.).
- Proficiency in Python, LangChain, Hugging Face Transformers, and MLOps.
- Experience with reinforcement learning and multi-agent systems for decision-making in dynamic environments.
- Knowledge of multimodal AI (integrating text, image, and other data modalities into unified models).

Preferred Attributes:
- Entrepreneurial mindset with a hands-on approach to development.
- Experience in AI solution architecture and optimization techniques.
- Ability to work in a fast-paced, business-facing environment and lead AI initiatives.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 6 days ago
3.0 years
0 Lacs
New Delhi, Delhi, India
Remote
Career Lab Consulting Pvt. Ltd. (CLC) is hiring a Senior Agentic AI Developer to architect and build the world’s first autonomous AI-powered marketing system, replacing an entire marketing department with a multi-agent system. This is a zero-to-one leadership role with massive ownership and deep-tech innovation at its core.

Position: Senior Agentic AI Developer
Location: Remote (India Preferred | Hybrid Optional)
Type: Full-Time | High-Ownership Role
CTC: ₹12–15 LPA (Fixed) + Performance Bonuses
Apply: hr@careerlabconsulting.com
WhatsApp HR Mamta: +91-8700827753

Your Mission
You will single-handedly design and deploy a cost-efficient, tool-lite, fully autonomous agentic AI system to run the marketing ops of InternX–AI, India’s flagship Agentic AI Career Accelerator. You’ll lead architecture, development, and scaling, and mentor interns, enabling the system to operate without human intervention across:
- Content creation
- CRM & email campaigns
- Paid ads management
- SEO workflows
- Social media posting
- Lead nurturing
- Analytics & reporting

What You’ll Build (in 90–120 Days)
- Multi-agent architecture with self-healing, planning, and memory
- Lightweight, cost-optimized stack (OSS > SaaS)
- End-to-end autonomous marketing workflows
- A trained intern team to support and scale the AI system
- Interactive dashboards for business teams to control and review outputs

Tech Stack You’ll Work With
- Python (async, FastAPI, LangGraph)
- LLMs (OpenAI, Claude, LLaMA, Mistral, etc.)
- Agent frameworks (LangGraph, CrewAI, CAMEL, OpenAgents)
- RAG + vector DBs (FAISS, Chroma, Weaviate)
- Prompt chaining, memory modules, toolformer logic
- DevOps: Docker, Git, CI/CD
- Web scraping, headless browsers, API agents

Who We’re Looking For
- Zero-to-One Builder: You love uncharted territory
- Agentic Thinker: Not just prompt engineering, but full autonomy loops
- Intern Mentor: You’re a leader who uplifts junior talent
- Cost Hacker: You always think ROI, not just API
- Future CTO DNA: You're thinking 3 years ahead, not 3 weeks

Additional Skills
- Agent simulations (AutoGPT/CAMEL-style dialogue agents)
- Custom vector memory systems
- RLHF or small-model fine-tuning
- Agentic learning from feedback loops

Mentorship Duties
You’ll lead and upskill a small batch of interns across:
- LLM workflows and prompt chains
- Python automation and agent-based systems
- AI design logic for scalable execution
- Real-time marketing agent delivery

What You’ll Get
- Remote-first freedom with async flexibility
- Ownership of a first-of-its-kind agentic AI platform
- Exposure to InternX–AI, India's flagship AI career tech brand
- Intern + DevOps support to let you focus on building
- An outcome-oriented culture with innovation at the core

How to Apply
Send us:
- Your resume (or LinkedIn profile link)
- GitHub/code samples or project write-ups (Notion-style preferred)
- A short note: "An autonomous AI system you've built or want to build"
Email: hr@careerlabconsulting.com
WhatsApp HR Mamta: +91-8700827753
Subject Line: Senior Agentic AI Developer – Marketing Autopilot Project

Not a Traditional Role
This is not prompt engineering. This is not marketing automation. This is autonomous business execution, built from scratch. If you're ready to build the future, join us.
Posted 6 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications
Location: Sector 63, Gurugram, Haryana – 100% in-office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM – 8:00 PM
Experience: 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments
Function: AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment
Apply: careers@darwix.ai
Subject Line: “Application – Applied ML Scientist – [Your Name]”

About Darwix AI
Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate:
- Real-time nudges for agents and reps
- Conversational analytics and scoring to drive performance
- CCTV-based behavior insights to boost in-store conversion
We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, and Bank Dofar. We’re backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies.

Role Overview
We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers. This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments.
Key Responsibilities

Voice-to-Text (ASR) Engineering
- Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages
- Integrate diarization and punctuation-recovery pipelines
- Benchmark and improve transcription accuracy across noisy call environments
- Optimize ASR latency for real-time and batch processing modes

NLP & Conversational Intelligence
- Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring
- Build call-scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.)
- Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance
- Contribute to real-time inference APIs for NLP outputs in live dashboards

GenAI & LLM Systems
- Design and test GenAI prompts for summarization, coaching, and feedback generation
- Integrate retrieval-augmented generation (RAG) using OpenAI, Hugging Face, or open-source LLMs
- Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics
- Implement prompt tuning, caching, and fallback strategies to ensure system reliability

Experimentation & Deployment
- Own the model lifecycle: data preparation, training, evaluation, deployment, monitoring
- Build reproducible training pipelines using MLflow, DVC, or similar tools
- Write efficient, well-structured, production-ready code for inference APIs
- Document experiments and share insights with cross-functional teams

Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields
- 3–7 years of experience applying ML in production, including NLP and/or speech
- Experience with transformer-based architectures for text or audio (e.g., BERT, wav2vec, Whisper)
- Strong Python skills with experience in PyTorch or TensorFlow
- Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker)
- Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms)
- Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility
- Ability to work collaboratively in a fast-paced startup environment

Preferred Skills
- Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.)
- Knowledge of diarization and speaker-separation algorithms
- Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines
- Familiarity with inference optimization techniques (quantization, ONNX, TorchScript)
- Contributions to open-source ASR or NLP projects
- Working knowledge of AWS/GCP/Azure cloud platforms

What Success Looks Like
- Transcription accuracy improvement ≥ 85% across core languages
- NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls
- 3–5 LLM-driven product features delivered in the first year
- Inference latency reduced by 30–50% through model and infra optimization
- AI features embedded across all Tier 1 customer accounts within 12 months

Life at Darwix AI
You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks.
Compensation & Perks
- Competitive fixed salary based on experience
- Quarterly/annual performance-linked bonuses
- ESOP eligibility after 12 months
- Compute credits and a model experimentation environment
- Health insurance and a mental wellness stipend
- Premium tools and GPU access for model development
- A learning wallet for certifications, courses, and AI research access

Career Path
- Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules
- Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence
- Year 3: Grow into Head of Applied AI or architect-level roles across vertical product lines

How to Apply
Email the following to careers@darwix.ai:
- Updated resume (PDF)
- A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?”
- Optional: GitHub or portfolio links demonstrating your work
Subject Line: “Application – Applied Machine Learning Scientist – [Your Name]”
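Benchmarking transcription accuracy, as described in the ASR responsibilities above, is usually reported as word error rate (WER): the word-level Levenshtein distance between reference and hypothesis, divided by the reference length. A dependency-free sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        return float(bool(hyp))
    # single-row dynamic programming over the edit-distance table
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution or match
        prev = cur
    return prev[-1] / len(ref)
```

Note WER can exceed 1.0 when the hypothesis inserts many extra words, which is why noisy-call benchmarks also track insertion/deletion rates separately.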
Posted 6 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Founding AI Engineer
Location: Sector 63, Gurgaon – on-site
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 4–8 years of hands-on AI/ML engineering in production environments
Apply: careers@darwix.ai
Subject Line: Application – Founding AI Engineer – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform transforming how enterprise revenue and service teams operate. Our products — Transform+, Sherpa.ai, and Store Intel — deliver multilingual speech-to-text, live coaching nudges, behavioural scoring, and computer-vision insights for clients such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. Backed by leading investors and built by IIT/IIM/BITS alumni, we are expanding rapidly across India, MENA, and Southeast Asia.

Role Overview
As the Founding AI Engineer, you will own the design, development, and deployment of Darwix AI’s core machine-learning and generative-AI systems from the ground up. You will work directly with the CTO and founders to convert ambitious product ideas into scalable, low-latency services powering thousands of live customer interactions daily. This is a zero-to-one, high-ownership role that shapes the technical backbone, and the culture, of our AI organisation.

Key Responsibilities

End-to-End Model Build & Deployment
- Architect, train, and fine-tune multilingual speech-to-text, diarisation, NER, summarisation, and scoring models (Whisper, wav2vec 2.0, transformer-based NLP).
- Design RAG pipelines and prompt-engineering frameworks with commercial and open-source LLMs (OpenAI, Mistral, Llama 2).
- Build GPU/CPU-optimised inference microservices in Python/FastAPI with strict latency budgets.

Production Engineering
- Implement asynchronous processing, message queues, caching, and load balancing for high-concurrency voice and text streams.
- Establish CI/CD, model versioning, A/B testing, and automated rollback for ML APIs.
Data Strategy & Tooling
- Define data-collection, labelling, and active-learning loops; build pipelines for continuous model improvement.
- Create evaluation harnesses (WER, ROUGE, AUROC, latency) and automate nightly regression tests.

Security & Compliance
- Implement role-based access, encryption at rest and in transit, and audit logging for all AI endpoints.
- Ensure adherence to enterprise infosec requirements and regional data-privacy standards.

Cross-Functional Collaboration
- Partner with product managers to translate customer pain points into technical requirements and success metrics.
- Work with backend, DevOps, and frontend teams to expose AI outputs via dashboards, APIs, and real-time agent-assist overlays.

Technical Leadership
- Establish coding standards, documentation templates, and a peer-review culture for the AI team.
- Mentor junior engineers as the team scales; influence hiring and tech-stack decisions.

Required Skills & Qualifications
- 4–8 years building and deploying ML systems in production (audio, NLP, or LLM focus).
- Expert-level Python; strong grasp of PyTorch (or TensorFlow), Hugging Face Transformers, and data-processing libraries.
- Proven record of optimising inference pipelines for sub-second latency at scale.
- Hands-on experience with cloud infrastructure (AWS or GCP), Docker/Kubernetes, and CI/CD for ML.
- Deep understanding of REST/gRPC APIs, security best practices, and high-availability architectures.
- Ability to articulate trade-offs and align technical decisions with business outcomes.

Preferred Experience
- Prior work on Indic or Arabic speech/NLP, code-switching, or low-resource language modelling.
- Familiarity with vector databases (Pinecone, FAISS), Redis Streams/Kafka, and GPU orchestration (Triton, TorchServe).
- Exposure to sales-tech, call-centre analytics, or real-time coaching platforms.
- Contributions to open-source AI projects or relevant peer-reviewed publications.

Success Metrics (First 12 Months)
- ≥ 25% reduction in transcription error rate or latency across core languages.
- Two net-new AI modules shipped to production and adopted by Tier-1 clients.
- Robust CI/CD and monitoring pipelines in place with < 1% model downtime.
- Documentation and onboarding playbooks enabling the AI team headcount to double without quality loss.

Who You Are
- A builder who takes ideas from whiteboard to production with minimal supervision.
- A systems thinker who balances algorithmic innovation with engineering pragmatism.
- A hands-on leader who codes, mentors, and sets the technical bar through example.
- A product-centric technologist who obsesses over user impact, not benchmark vanity.
- A lifelong learner who follows the bleeding edge of GenAI and applies it wisely.

How to Apply
Email your résumé to careers@darwix.ai with the subject line specified above. Optionally, include a brief note detailing an AI system you have designed and deployed, the challenges faced, and the measurable impact achieved.

Joining Darwix AI as the Founding AI Engineer means taking ownership of the platform that will redefine how revenue teams worldwide leverage real-time intelligence. If you are ready to build, scale, and lead at the frontier of GenAI, we look forward to hearing from you.
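The sub-second latency budgets and monitoring goals above are typically tracked as tail percentiles rather than averages. A stdlib-only sketch of a p95 computation over collected per-request timings (the n=20 quantile grid is an assumption; production monitoring would use histogram metrics in Prometheus or similar):

```python
import statistics


def p95_ms(latencies_ms):
    """95th-percentile latency from a list of per-request timings (ms)."""
    if not latencies_ms:
        raise ValueError("no samples")
    # quantiles with n=20 yields 19 cut points; index 18 is the 95th percentile
    return statistics.quantiles(latencies_ms, n=20)[18]
```

Alerting on p95 or p99 catches the slow tail that a mean latency metric hides, which matters for real-time agent-assist overlays.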
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As an AI Systems Architect / LLM-Native Generalist at SBEK (Sab-Ek), a jewelry brand rooted in service, myth, and light, you will play a crucial role in shaping the integration of storytelling, AI systems, product design, and social impact. Your primary responsibility will be to design, test, and implement advanced AI tools and infrastructure that enhance our brand and operational efficiency. Your expertise across no-code and low-code platforms, modern LLM tools, and the evolving AI agent ecosystem will be essential for this role. You should bring a proactive approach to experimentation, quick learning, and a track record of reliable execution.

Key Responsibilities:
- Design and manage internal AI workflows using platforms such as Supabase, Lovable, Cursor, n8n, and Airtable
- Evaluate and utilize tools like LangChain, GPT-4, Claude, Mistral, Flowise, Vercel AI SDK, and other RAG frameworks
- Collaborate with the founder to create prototypes using a combination of no-code solutions, automation, and API logic
- Stay updated on the latest LLM tools, open-source projects, and AI-enabled platforms
- Develop simple documentation and system maps to ensure team alignment

Required Skills & Experience:
- Proficiency in AI-native tools such as Lovable, Cursor, n8n, Supabase, Zapier, Vercel, Firebase, Flowise, Make, and LangChain
- Understanding of backend logic, user roles, API integrations, and real-time data processing
- Ability to implement secure authentication systems and subscription workflows independently
- Strong problem-solving capabilities, creativity, and a knack for turning abstract ideas into functional solutions
- Familiarity with prompt engineering, vector databases, and the basics of model selection
- Willingness to iterate, experiment, and deliver solutions efficiently

Bonus Qualifications:
- Experience developing custom GPT models or personal AI assistants
- Demonstrated ability to integrate multiple tools or models in innovative ways
- Interest in leveraging AI for social-good initiatives
- Track record of deploying successful applications or automations
- Excellent communication skills and the ability to articulate complex concepts clearly

This role is ideal for individuals who already possess a deep understanding of AI concepts and technologies and are eager to explore the field further. Please note that we are seeking candidates who are already proficient in AI tools and platforms; we are not offering training from scratch. If you are actively exploring AI tools, using platforms like Lovable and Cursor, and envisioning their potential applications in products and businesses, we encourage you to apply for this exciting opportunity at SBEK.
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction (voice, video, and chat) in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia, and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and industry CXOs. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of the AI systems that power Darwix AI's real-time conversation intelligence platform: voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration. This is a high-ownership, hands-on leadership role in which you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments
- Work on WebSocket-based bidirectional audio streaming, chunked inference, and result caching
- Develop asynchronous, event-driven architectures for voice processing and decision-making
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings)
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundation models using RLHF, SFT, and PEFT (e.g., LoRA) as needed
- Optimize prompts for summarization, categorization, tone analysis, objection handling, and more
- Perform few-shot and zero-shot evaluations for quality benchmarking
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tooling, Docker, Kubernetes, and GitHub Actions
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team
- Collaborate with backend, frontend, and product teams to build scalable production systems
- Participate in architectural and design decisions across AI, backend, and data workflows

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities)
⚙️ Technical Skills
- Strong Python coding experience and familiarity with FastAPI/Django
- Understanding of distributed architectures, memory management, and latency optimization
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design
💡 Bonus Experience
- Worked on multilingual speech recognition and translation
- Deployed AI models on edge devices or in browsers
- Built or contributed to open-source ML/NLP projects
- Published papers or patents in voice, NLP, or deep learning

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least one enterprise client
- Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models
- Build an in-house knowledge indexing + vector DB framework integrated into the product
- Mentor 2–3 AI engineers and own execution across multiple modules
- Achieve sub-second latency on the real-time voice-to-nudge pipeline, from capture to recommendation

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft and IIT/IIM/BITS alumni, and top AI engineers
Scale: Opportunity to grow an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role Is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is not for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in with the subject line "Application – AI Engineer – [Your Name]". Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
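The RAG responsibilities described in the posting (chunking, embedding, and retrieval against a vector index) can be illustrated with a deliberately minimal sketch. This is not Darwix AI's implementation: it substitutes a bag-of-words "embedding" and brute-force cosine similarity for the learned embeddings and vector databases (FAISS, Pinecone, Weaviate) a production system would use, so the retrieval flow is visible end to end.

```python
# Toy sketch of the retrieval step in a RAG pipeline: chunk documents,
# embed each chunk, and return the chunks nearest to a query.
# The embedding here is a bag-of-words Counter standing in for a learned
# model; similarity search is brute-force instead of an ANN index.
import math
from collections import Counter


def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word windows (a simple chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' standing in for a learned embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, index: list[tuple[str, Counter]], k: int = 2) -> list[str]:
    """Return the k indexed chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]


corpus = (
    "Objection handling scripts help agents respond to pricing concerns. "
    "Diarization separates speakers in a call recording. "
    "Nudges surface the next best action to the sales agent in real time."
)
index = [(c, embed(c)) for c in chunk(corpus)]
top = retrieve("how do agents handle pricing objections", index, k=1)
```

In a real pipeline the `embed` call would hit an embedding API or local model, and `retrieve` would query a vector store; the chunk/embed/rank shape stays the same.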
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: ML Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction (voice, video, and chat) in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia, and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and industry CXOs. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the ML Engineer, you will drive the development, deployment, and optimization of the AI systems that power Darwix AI's real-time conversation intelligence platform: voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration. This is a high-ownership, hands-on leadership role in which you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments
- Work on WebSocket-based bidirectional audio streaming, chunked inference, and result caching
- Develop asynchronous, event-driven architectures for voice processing and decision-making
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings)
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundation models using RLHF, SFT, and PEFT (e.g., LoRA) as needed
- Optimize prompts for summarization, categorization, tone analysis, objection handling, and more
- Perform few-shot and zero-shot evaluations for quality benchmarking
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tooling, Docker, Kubernetes, and GitHub Actions
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team
- Collaborate with backend, frontend, and product teams to build scalable production systems
- Participate in architectural and design decisions across AI, backend, and data workflows

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities)
⚙️ Technical Skills
- Strong Python coding experience and familiarity with FastAPI/Django
- Understanding of distributed architectures, memory management, and latency optimization
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design
💡 Bonus Experience
- Worked on multilingual speech recognition and translation
- Deployed AI models on edge devices or in browsers
- Built or contributed to open-source ML/NLP projects
- Published papers or patents in voice, NLP, or deep learning

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least one enterprise client
- Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models
- Build an in-house knowledge indexing + vector DB framework integrated into the product
- Mentor 2–3 AI engineers and own execution across multiple modules
- Achieve sub-second latency on the real-time voice-to-nudge pipeline, from capture to recommendation

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft and IIT/IIM/BITS alumni, and top AI engineers
Scale: Opportunity to grow an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role Is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is not for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in or vishnu.sethi@cur8.in with the subject line "Application – ML Engineer – [Your Name]". Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
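The real-time requirements this posting describes (chunked inference over streaming audio within a sub-second capture-to-nudge budget) can be sketched as a toy asyncio producer/consumer pipeline. All names and timings here are illustrative assumptions, not Darwix AI's code; the `transcribe` step is a placeholder for a real STT call such as Whisper, and the bounded queue models backpressure between capture and inference.

```python
# Minimal sketch of an asynchronous chunked audio pipeline: a producer
# pushes audio chunks into a bounded queue, a consumer "transcribes"
# each chunk, and per-chunk end-to-end latency is measured against a
# sub-second budget. Purely illustrative; no real audio is processed.
import asyncio
import time


async def producer(queue: asyncio.Queue, n_chunks: int) -> None:
    """Simulate audio capture pushing fixed-size chunks downstream."""
    for i in range(n_chunks):
        await queue.put((i, time.monotonic()))  # chunk id + enqueue time
    await queue.put(None)  # sentinel: end of stream


async def transcribe(chunk_id: int) -> str:
    """Placeholder for a real STT model call (e.g. Whisper inference)."""
    await asyncio.sleep(0.01)  # stand-in for inference time
    return f"text-{chunk_id}"


async def consumer(queue: asyncio.Queue) -> list[float]:
    """Drain the queue, 'transcribe' each chunk, record end-to-end latency."""
    latencies = []
    while (item := await queue.get()) is not None:
        chunk_id, enqueued_at = item
        await transcribe(chunk_id)
        latencies.append(time.monotonic() - enqueued_at)
    return latencies


async def run(n_chunks: int = 5) -> list[float]:
    # maxsize bounds the queue so a slow consumer applies backpressure
    # to the producer instead of letting chunks pile up unboundedly.
    queue: asyncio.Queue = asyncio.Queue(maxsize=2)
    results = await asyncio.gather(producer(queue, n_chunks), consumer(queue))
    return results[1]


latencies = asyncio.run(run())
```

The same shape scales out in practice: the queue becomes a WebSocket or Kafka topic, `transcribe` becomes a batched model server call, and the latency list feeds the benchmarking tools the posting mentions.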
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction (voice, video, and chat) in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia, and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and industry CXOs. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the Lead AI Engineer, you will drive the development, deployment, and optimization of the AI systems that power Darwix AI's real-time conversation intelligence platform: voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration. This is a high-ownership, hands-on leadership role in which you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments
- Work on WebSocket-based bidirectional audio streaming, chunked inference, and result caching
- Develop asynchronous, event-driven architectures for voice processing and decision-making
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings)
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundation models using RLHF, SFT, and PEFT (e.g., LoRA) as needed
- Optimize prompts for summarization, categorization, tone analysis, objection handling, and more
- Perform few-shot and zero-shot evaluations for quality benchmarking
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tooling, Docker, Kubernetes, and GitHub Actions
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team
- Collaborate with backend, frontend, and product teams to build scalable production systems
- Participate in architectural and design decisions across AI, backend, and data workflows

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities)
⚙️ Technical Skills
- Strong Python coding experience and familiarity with FastAPI/Django
- Understanding of distributed architectures, memory management, and latency optimization
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design
💡 Bonus Experience
- Worked on multilingual speech recognition and translation
- Deployed AI models on edge devices or in browsers
- Built or contributed to open-source ML/NLP projects
- Published papers or patents in voice, NLP, or deep learning

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least one enterprise client
- Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models
- Build an in-house knowledge indexing + vector DB framework integrated into the product
- Mentor 2–3 AI engineers and own execution across multiple modules
- Achieve sub-second latency on the real-time voice-to-nudge pipeline, from capture to recommendation

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft and IIT/IIM/BITS alumni, and top AI engineers
Scale: Opportunity to grow an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role Is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is not for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in with the subject line "Application – Lead AI Engineer – [Your Name]". Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
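The quality-benchmarking duties the posting lists (few-shot and zero-shot evaluations) ultimately reduce to scoring predicted labels against a gold set. Below is a self-contained sketch of one such metric, binary F1 for a single positive class; in a real pipeline `predicted` would come from an LLM classifying call segments, but here it is hard-coded so the arithmetic is checkable.

```python
# Toy evaluation harness: binary precision/recall/F1 for one positive
# class over parallel gold and predicted label lists. Real benchmarking
# would use a library such as scikit-learn; this shows the arithmetic.
def f1_score(gold: list[str], predicted: list[str], positive: str) -> float:
    """Binary F1 for `positive` over two parallel label lists."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# Hypothetical call-segment labels; `predicted` stands in for LLM output.
gold      = ["objection", "greeting", "objection", "closing", "objection"]
predicted = ["objection", "objection", "objection", "closing", "greeting"]
score = f1_score(gold, predicted, positive="objection")
# tp=2, fp=1, fn=1 -> precision = recall = 2/3 -> F1 = 2/3
```

Running the same scoring over outputs from a zero-shot prompt versus a few-shot prompt is what turns "prompt optimization" into a measurable comparison.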
Posted 1 week ago