Jobs
Interviews

895 Summarization Jobs - Page 5

Setup a job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As our AI Engineer, you’ll own the design, development, and production-grade deployment of our machine learning and NLP pipelines. You’ll work cross-functionally with backend (Java/Spring Boot), data (Kafka/MongoDB/ES), and frontend (React) teams to embed AI capabilities throughout. Responsibilities Build & Deploy ML/NLP Models Design end-to-end ML pipelines for data ingestion, preprocessing, feature engineering, model training, evaluation and monitoring. Train, deploy and operate predictive models (classification, regression, anomaly detection) to drive actionable insights across all MCP sources. Implement NLP components - such as text classification, summarization, and conversational interfaces—to enhance chat-driven workflows and knowledge retrieval. Data Engineering & Integration Ingest, clean, and normalize data from Kafka/Mongo and third-party APIs Define and maintain JSON-schema validations and transformation logic Collaborate with backend services to embed AI outputs Platform & Service Collaboration Work with Java/Spring Boot teams to wrap models as REST endpoints or Kafka stream processors Ensure end-to-end monitoring, logging, and performance tuning within Kubernetes Partner with frontend engineers to surface AI insights in React-based chat interfaces Continuous Improvement Establish A/B testing, metrics, and feedback loops to tune model accuracy and latency Stay on top of LLM and MLops best practices to evolve our AI stack Qualifications Experience: 2–3 years in ML/AI or data science roles, preferably in SaaS. Languages & Frameworks: Python and Familiarity with Java & Spring Boot for service integrations. Data & Infrastructure: Hands-on with Kafka, MongoDB, Redis or similar. Experience containerizing in Docker and deploying on Kubernetes. JSON-path/JSONLogic or similar transformation engines. Soft Skills: Excellent communication - able to translate complex AI concepts to product and customer teams. Nice-to-Haves Experience integrating LLMs or building vector search indexes for semantic retrieval Prior work on chatbots or conversational UIs Familiarity with DevOps stack. (AWS/Azure, k8s, GitOps, Security, Observability and Incident management)

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Why CDM Smith? Check out this video and find out why our team loves to work here! Join Us! CDM Smith – where amazing career journeys unfold. Imagine a place committed to offering an unmatched employee experience. Where you work on projects that are meaningful to you. Where you play an active part in shaping your career journey. Where your co-workers are invested in you and your success. Where you are encouraged and supported to do your very best and given the tools and resources to do so. Where it’s a priority that the company takes good care of you and your family. Our employees are the heart of our company. As an employer of choice, our goal is to provide a challenging, progressive and inclusive work environment which fosters personal leadership, career growth and development for every employee. We value passionate individuals who challenge the norm, deliver world-class solutions and bring diverse perspectives. Join our team, and together we will make a difference and change the world. Job Description AI Model Development: Design, develop, and implement cutting-edge machine learning algorithms and models, focusing on NLP, Generative AI, and other AI technologies. Research and Innovation: Stay up to date with the latest advancements in machine learning and artificial intelligence. Conduct research to identify new techniques and approaches to improve our AI solutions. Data Analysis: Analyze large and complex datasets to extract meaningful insights. Apply statistical analysis and machine learning techniques to gain a deeper understanding of the data. Collaboration: Collaborate with cross-functional teams, including software engineers and data scientists, to integrate machine learning models into applications and systems effectively. Algorithm Optimization: Optimize machine learning algorithms for performance, scalability, and efficiency. Identify and resolve bottlenecks to ensure smooth and fast execution. Testing and Validation: Evaluate the performance of machine learning models using appropriate metrics. Conduct rigorous testing and validation to ensure the accuracy and reliability of the models. Documentation: Document the development process, algorithms, and models. Prepare clear and concise technical documentation for reference and knowledge sharing. Ability to conduct cost/benefit analysis, Business case development etc. Prioritize requirements and create conceptual prototypes and mock-ups. Master strategic business process modeling, traceability, and quality management techniques. Strong verbal and written communication and presentation skills. Experience identifying and communicating analytical outcomes, verbally and in writing, to both business and technical teams. Minimum Qualifications Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS etc.) Proven Familiarity of Finance & Accounting principles and/or project accounting. Proven experience in developing and implementing machine learning models, particularly in areas like NLP and Generative AI. Proficiency in programming languages such as Python, TensorFlow, PyTorch, or similar frameworks. Strong understanding of deep learning architectures, algorithms, and frameworks. Experience with natural language processing techniques, sentiment analysis, text summarization, and related NLP tasks. Familiarity with generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). Bachelor’s degree in computer science, Information Management/IT, Statistics, Business Administration, or related disciplines. Master’s Degree in Statistics, Business Administration or related disciplines is a plus. Certifications in Data Analytics & Data Science is a plus. Must have 1+ years of experience working on ML algorithms and related products. Must have 1+ years of experience working with relational databases (Oracle), query authoring (SQL). Experience in Business Intelligence tools like Qlik Sense, Tableau, Power BI is a plus. Experience on creating the Dashboards, Scorecards, Ad-hoc Reports. Strong Knowledge in ERP (Oracle EBS: Projects, AR, AP Modules) Amount Of Travel Required 0% Background Check and Drug Testing Information CDM Smith Inc. and its divisions and subsidiaries (hereafter collectively referred to as “CDM Smith”) reserves the right to require background checks including criminal, employment, education, licensure, etc. as well as credit and motor vehicle when applicable for certain positions. In addition, CDM Smith may conduct drug testing for designated positions. Background checks are conducted after an offer of employment has been made in the United States. The timing of when background checks will be conducted on candidates for positions outside the United States will vary based on country statutory law but in no case, will the background check precede an interview. CDM Smith will conduct interviews of qualified individuals prior to requesting a criminal background check, and no job application submitted prior to such interview shall inquire into an applicant's criminal history. If this position is subject to a background check for any convictions related to its responsibilities and requirements, employment will be contingent upon successful completion of a background investigation including criminal history. Criminal history will not automatically disqualify a candidate. In addition, during employment individuals may be required by CDM Smith or a CDM Smith client to successfully complete additional background checks, including motor vehicle record as well as drug testing. Agency Disclaimer All vendors must have a signed CDM Smith Placement Agreement from the CDM Smith Recruitment Center Manager to receive payment for your placement. Verbal or written commitments from any other member of the CDM Smith staff will not be considered binding terms. All unsolicited resumes sent to CDM Smith and any resume submitted to any employee outside of CDM Smith Recruiting Center Team (RCT) will be considered property of CDM Smith. CDM Smith will not be held liable to pay a placement fee. Business Unit COR Group COR Assignment Category Fulltime-Regular Employment Type Regular

Posted 6 days ago

Apply

3.0 years

1 - 2 Lacs

Bengaluru

On-site

JOB DESCRIPTION Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Applied AI ML Senior Associate at JPMorgan Chase within the Asset and Wealth Management , you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives. Job responsibilities Supports and develops an understanding of key business problems and processes. Advises a model development process, execute tasks including data wrangling/analysis, model training, testing, and selection. Strategically, implement optimization strategies to fine-tune generative models for specific NLP use cases, ensuring high-quality outputs in summarization and text generation. Updates logically and conducts evaluations of generative models (e.g., GPT-4), iterate on model architectures, and implement improvements to enhance overall performance in NLP applications. Implements monitoring mechanisms to track model performance and ensure model reliability. Frequently communicates AI/ML/LLM/GenAI capabilities and results to both technical and non-technical audiences. From data analysis and modeling exercises, generate structured and meaningful insights and present them in an appropriate format according to the audience. Collaboratively, work with other data scientists and machine learning engineers to deploy machine learning solutions. As required by the business stakeholder, model risk function, and other groups, carry out ad-hoc and periodic analysis. Adds to team culture of diversity, opportunity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification in applied AI/ML concepts and 3+ years applied experience Proficiency in programming languages like Python for model development, experimentation, and integration with OpenAI API. Experience with machine learning frameworks, libraries, and APIs, such as TensorFlow, PyTorch, Scikit-learn, and OpenAI API. Experience in building AI/ML models on structured and unstructured data along with model explainability and model monitoring. Solid understanding of fundamentals of statistics, machine learning (e.g., classification, regression, time series, deep learning, reinforcement learning), and generative model architectures, particularly GANs, VAEs. Experience with a broad range of analytical toolkits, such as SQL, Spark, Scikit-Learn, and XGBoost. Experience with graph analytics and neural networks (PyTorch). Excellent problem-solving, communication (verbal and written), and teamwork skills. Preferred qualifications, capabilities, and skills Expertise in building AI/ML models on structured and unstructured data along with model explainability and model monitoring. Expertise in designing and implementing pipelines using Retrieval-Augmented Generation (RAG). Familiarity with the financial services industry. ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.

Posted 6 days ago

Apply

1.0 - 2.0 years

2 Lacs

Bharatpur

On-site

✅ Location: Bharatpur (On-Site) ✅ Job Type: Full-Time ✅ Experience: 1–2 Years ✅ Salary: ₹25,000 – ₹35,000/month (based on skills & experience) ✅Job Description: We are looking for a passionate and skilled Python Developer with experience in Machine Learning (ML) and Large Language Models (LLMs) to join our on-site team in Bharatpur . You'll be responsible for developing intelligent tools and solutions using open-source ML/NLP frameworks. ✅Key Responsibilities: Build and deploy ML/LLM-based solutions using Python Work on NLP tasks such as text classification, summarization, and chatbot development Integrate and fine-tune pre-trained LLMs (Hugging Face, OpenAI, etc.) Use libraries like Transformers, LangChain, scikit-learn, or TensorFlow Handle data collection, cleaning, vectorization, and embeddings Create APIs using Flask or FastAPI for ML model deployment Collaborate with the product and engineering teams for real-world use cases ✅ Required Skills: 1–2 years of hands-on experience in Python and ML/NLP Knowledge of LLM frameworks like Hugging Face, LangChain, or OpenAI Experience with vector databases like FAISS or Pinecone (preferred) Familiarity with Transformers, embeddings, tokenization Understanding of REST APIs and integration Good problem-solving and debugging skills ✅Good to Have: Experience with chatbot frameworks or RAG pipelines Exposure to tools like Gradio, Streamlit for UI prototyping Version control using Git Docker/basic deployment skills ✅What We Offer: Competitive salary with performance-based growth Opportunity to work on innovative AI projects locally Learning-focused culture and skill development Supportive and collaborative work environment 5-day work week (Mon–Fri) If you're excited about AI, Python, and real-world ML applications — apply now and join our team in Bharatpur! Job Types: Full-time, Permanent Pay: From ₹20,347.73 per month Benefits: Cell phone reimbursement Health insurance Internet reimbursement Life insurance Paid sick time Provident Fund Location: Bharatpur, Rajasthan (Required) Work Location: In person

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications Location : Sector 63, Gurugram, Haryana – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM – 8:00 PM Experience : 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments Function : AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment Apply : careers@darwix.ai Subject Line : “Application – Applied ML Scientist – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate: Real-time nudges for agents and reps Conversational analytics and scoring to drive performance CCTV-based behavior insights to boost in-store conversion We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar , and others. We’re backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies. Role Overview We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers . This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments. Key ResponsibilitiesVoice-to-Text (ASR) Engineering Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages Integrate diarization and punctuation recovery pipelines Benchmark and improve transcription accuracy across noisy call environments Optimize ASR latency for real-time and batch processing modes NLP & Conversational Intelligence Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.) Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance Contribute to real-time inference APIs for NLP outputs in live dashboards GenAI & LLM Systems Design and test GenAI prompts for summarization, coaching, and feedback generation Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics Implement prompt tuning, caching, and fallback strategies to ensure system reliability Experimentation & Deployment Own model lifecycle: data preparation, training, evaluation, deployment, monitoring Build reproducible training pipelines using MLflow, DVC, or similar tools Write efficient, well-structured, production-ready code for inference APIs Document experiments and share insights with cross-functional teams Required Qualifications Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields 3–7 years experience applying ML in production, including NLP and/or speech Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper) Strong Python skills with experience in PyTorch or TensorFlow Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker) Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms) Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility Ability to work collaboratively in a fast-paced startup environment Preferred Skills Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.) Knowledge of diarization and speaker separation algorithms Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines Familiarity with inference optimization techniques (quantization, ONNX, TorchScript) Contribution to open-source ASR or NLP projects Working knowledge of AWS/GCP/Azure cloud platforms What Success Looks Like Transcription accuracy improvement ≥ 85% across core languages NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls 3–5 LLM-driven product features delivered in the first year Inference latency reduced by 30–50% through model and infra optimization AI features embedded across all Tier 1 customer accounts within 12 months Life at Darwix AI You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks. Compensation & Perks Competitive fixed salary based on experience Quarterly/Annual performance-linked bonuses ESOP eligibility post 12 months Compute credits and model experimentation environment Health insurance, mental wellness stipend Premium tools and GPU access for model development Learning wallet for certifications, courses, and AI research access Career Path Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines How to Apply Email the following to careers@darwix.ai : Updated resume (PDF) A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?” Optional: GitHub or portfolio links demonstrating your work Subject Line : “Application – Applied Machine Learning Scientist – [Your Name]”

Posted 6 days ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Head of AI & ML Platforms Focus : Voice AI, NLP, Conversation Intelligence for Omnichannel Enterprise Sales Location : Sector 63, Gurugram, Haryana — Full-time, 100% In-Office Work Hours : 10:30 AM – 8:00 PM, Monday to Friday (2nd and 4th Saturdays off) Experience Required : 8–15 years in AI/ML, with 3+ years leading teams in voice, NLP, or conversation platforms Apply : careers@darwix.ai Subject Line : “Application – Head of AI & ML Platforms – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers contextual nudges, conversation scoring, and performance analytics in real time. Our suite of products includes: Transform+ : Real-time conversational intelligence for contact centers and field sales Sherpa.ai : A multilingual GenAI assistant that provides in-the-moment coaching, summaries, and objection handling support Store Intel : A computer vision solution that transforms CCTV feeds into actionable insights for physical retail spaces Darwix AI is trusted by large enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty , and is backed by leading institutional and operator investors. We are expanding rapidly across India, the Middle East, and Southeast Asia. Role Overview We are seeking a highly experienced and technically strong Head of AI & ML Platforms to architect and lead the end-to-end AI systems powering our voice intelligence, NLP, and GenAI solutions. This is a leadership role that blends research depth with applied engineering execution. The ideal candidate will have deep experience in building and deploying voice-to-text pipelines, multilingual NLP systems, and production-grade inference workflows. The individual will be responsible for model design, accuracy benchmarking, latency optimization, infrastructure orchestration, and integration across our product suite. This is a critical leadership role with direct influence over product velocity, enterprise client outcomes, and future platform scalability. Key ResponsibilitiesVoice-to-Text (ASR) Architecture Lead the design and optimization of large-scale automatic speech recognition (ASR) pipelines using open-source and commercial frameworks (e.g., WhisperX, Deepgram, AWS Transcribe) Enhance speaker diarization, custom vocabulary accuracy, and latency performance for real-time streaming scenarios Build fallback ASR workflows for offline and batch mode processing Implement multilingual and domain-specific tuning, especially for Indian and GCC languages Natural Language Processing and Conversation Analysis Build NLP models for conversation segmentation, intent detection, tone/sentiment analysis, and call scoring Implement multilingual support (Hindi, Arabic, Tamil, etc.) with fallback strategies for mixed-language and dialectal inputs Develop robust algorithms for real-time classification of sales behaviors (e.g., probing, pitching, objection handling) Train and fine-tune transformer-based models (e.g., BERT, RoBERTa, DeBERTa) and sentence embedding models for text analytics GenAI and LLM Integration Design modular GenAI pipelines for nudging, summarization, and response generation using tools like LangChain, LlamaIndex, and OpenAI APIs Implement retrieval-augmented generation (RAG) architectures for contextual, accurate, and hallucination-resistant outputs Build prompt orchestration frameworks that support real-time sales coaching across channels Ensure safety, reliability, and performance of LLM-driven outputs across use cases Infrastructure and Deployment Lead the development of scalable, secure, and low-latency AI services deployed via FastAPI, TorchServe, or similar frameworks Oversee model versioning, monitoring, and retraining workflows using MLflow, DVC, or other MLOps tools Build hybrid inference systems for batch, real-time, and edge scenarios depending on product usage Optimize inference pipelines for GPU/CPU balance, resource scheduling, and runtime efficiency Team Leadership and Cross-functional Collaboration Recruit, manage, and mentor a team of machine learning engineers and research scientists Collaborate closely with Product, Engineering, and Customer Success to translate product requirements into AI features Own AI roadmap planning, sprint delivery, and KPI measurement Serve as the subject-matter expert for AI-related client discussions, sales demos, and enterprise implementation roadmaps Required Qualifications 8+ years of experience in AI/ML with a minimum of 3 years in voice AI, NLP, or conversational platforms Proven experience delivering production-grade ASR or NLP systems at scale Deep familiarity with Python, PyTorch, HuggingFace, FastAPI, and containerized environments (Docker/Kubernetes) Expertise in fine-tuning LLMs and building multi-language, multi-modal intelligence stacks Demonstrated experience with tools such as WhisperX, Deepgram, Azure Speech, LangChain, MLflow, or Triton Inference Server Experience deploying real-time or near real-time inference models at enterprise scale Strong architectural thinking with the ability to design modular, reusable, and scalable ML services Track record of building and leading high-performing ML teams Preferred Skills Background in telecom, contact center AI, conversational analytics, or field sales optimization Familiarity with GPU deployment, model quantization, and inference optimization Experience with low-resource languages and multilingual data augmentation Understanding of sales enablement workflows and domain-specific ontology development Experience integrating AI models into customer-facing SaaS dashboards and APIs Success Metrics Transcription accuracy improvement by ≥15% across core languages within 6 months End-to-end voice-to-nudge latency reduced below 5 seconds GenAI assistant adoption across 70%+ of eligible conversations AI-driven call scoring rolled out across 100% of Tier 1 clients within 9 months Model deployment velocity (dev to prod) reduced by ≥40% through tooling and process improvements Culture at Darwix AI At Darwix AI, we operate at the intersection of engineering velocity and product clarity. We move fast, prioritize outcomes over optics, and expect leaders to drive hands-on impact. You will work directly with the founding team and senior leaders across engineering, product, and GTM functions. Expect ownership, direct communication, and a culture that values builders who scale systems, people, and strategy. Compensation and Benefits Competitive fixed compensation Performance-based bonuses and growth-linked incentives ESOP eligibility for leadership candidates Access to GPU/compute credits and model experimentation infrastructure Comprehensive medical insurance and wellness programs Dedicated learning and development budget for technical and leadership upskilling MacBook Pro, premium workstation, and access to industry tooling licenses Career Progression 12-month roadmap: Build and stabilize AI platform across all product lines 18–24-month horizon: Elevate to VP of AI or Chief AI Officer as platform scale increases globally Future leadership role in enabling new verticals (e.g., healthcare, finance, logistics) with domain-specific GenAI solutions How to Apply Send the following to careers@darwix.ai : Updated CV (PDF format) A short statement (200 words max) on: “How would you design a multilingual voice-to-text pipeline optimized for low-resource Indic languages, with real-time nudge delivery?” Links to any relevant GitHub repos, publications, or deployed projects (optional) Subject Line : “Application – Head of AI & ML Platforms – [Your Name]”

Posted 6 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bharatpur, Rajasthan

On-site

✅ Location: Bharatpur (On-Site) ✅ Job Type: Full-Time ✅ Experience: 1–2 Years ✅ Salary: ₹25,000 – ₹35,000/month (based on skills & experience) ✅Job Description: We are looking for a passionate and skilled Python Developer with experience in Machine Learning (ML) and Large Language Models (LLMs) to join our on-site team in Bharatpur . You'll be responsible for developing intelligent tools and solutions using open-source ML/NLP frameworks. ✅Key Responsibilities: Build and deploy ML/LLM-based solutions using Python Work on NLP tasks such as text classification, summarization, and chatbot development Integrate and fine-tune pre-trained LLMs (Hugging Face, OpenAI, etc.) Use libraries like Transformers, LangChain, scikit-learn, or TensorFlow Handle data collection, cleaning, vectorization, and embeddings Create APIs using Flask or FastAPI for ML model deployment Collaborate with the product and engineering teams for real-world use cases ✅ Required Skills: 1–2 years of hands-on experience in Python and ML/NLP Knowledge of LLM frameworks like Hugging Face, LangChain, or OpenAI Experience with vector databases like FAISS or Pinecone (preferred) Familiarity with Transformers, embeddings, tokenization Understanding of REST APIs and integration Good problem-solving and debugging skills ✅Good to Have: Experience with chatbot frameworks or RAG pipelines Exposure to tools like Gradio, Streamlit for UI prototyping Version control using Git Docker/basic deployment skills ✅What We Offer: Competitive salary with performance-based growth Opportunity to work on innovative AI projects locally Learning-focused culture and skill development Supportive and collaborative work environment 5-day work week (Mon–Fri) If you're excited about AI, Python, and real-world ML applications — apply now and join our team in Bharatpur! Job Types: Full-time, Permanent Pay: From ₹20,347.73 per month Benefits: Cell phone reimbursement Health insurance Internet reimbursement Life insurance Paid sick time Provident Fund Location: Bharatpur, Rajasthan (Required) Work Location: In person

Posted 6 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview We are looking for a high-impact Product Manager who thrives at the intersection of technology and pharma/life sciences . This role demands a sharp strategic thinker with hands-on technical depth , product ownership mindset , and a solid grasp of pharma domain knowledge — from primary market research (PMR) insights , competitive intelligence (CI) , to brand strategy . If you can translate brand/medical/commercial objectives into robust, scalable product solutions using AWS-native architectures , ML/GenAI models , and modern DevOps practices , you belong here. Key Responsibilities Product Leadership Own the end-to-end product lifecycle from discovery to launch across pharma/life sciences use cases. Translate unmet market and brand needs into differentiated, scalable, and user-centric product solutions. Prioritize features across platform modules by aligning commercial, medical, and data science needs. Partner with commercial, brand, and medical teams to translate PMR and CI into actionable product features. Technical & Platform Strategy Drive architectural discussions and product decisions around AWS cloud infrastructure , including Glue , Athena , Data Lake , S3 , Lambda , and Step Functions . Collaborate with engineering to ensure CI/CD pipelines , Docker , Kubernetes , and ML Ops practices are integrated for faster product iterations. Enable delivery of GenAI capabilities in the platform — from document intelligence, medical NLP, summarization to insight generation. Data & AI Productization Lead data strategy for ingesting, cleaning, and transforming EMR, Claims, HCP/HCO, and RWD data using PySpark , SQL , and data pipelines . Build roadmap around ML/GenAI-driven use cases: e.g., treatment pathway prediction, KOL segmentation, site recommendation, competitive tracking. Collaborate with data scientists to deploy models in production using APIs and cloud-native services. Market & Domain Expertise Leverage deep knowledge of pharma workflows (Medical Affairs, Market Access, Clinical Dev, Commercial Ops). Map out patient journeys, treatment landscapes, and brand objectives into platform features. Convert PMR data and CI signals into competitive positioning and product differentiation. Required Qualifications 6–8 years of experience in product management or technical product ownership. Strong experience in pharma or life sciences industry — ideally in commercial, medical, or clinical tech products. Proven hands-on experience with AWS cloud architecture , especially Glue, Athena, Data Lake, Step Functions. Proficient in Python , SQL , PySpark , and working knowledge of ML modeling & GenAI frameworks (LangChain, OpenAI, HuggingFace, etc.) . Strong grasp of DevOps pipelines (CI/CD, GitHub Actions/GitLab, Terraform, Docker, K8s) . Strong understanding of data engineering concepts — ingestion, normalization, feature engineering, and ML pipeline orchestration. Familiarity with primary market research methodologies , CI tools , and brand strategy in pharma. Preferred Skills Prior experience building SaaS or platform products in regulated industries. Knowledge of data privacy, HIPAA, and compliance frameworks.

Posted 6 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Principal Software Engineer – AI Location : Gurgaon (In-Office) Working Days : Monday to Saturday (2nd and 4th Saturdays are working) Working Hours : 10:30 AM – 8:00 PM Experience : 6–10 years of hands-on development in AI/ML systems, with deep experience in shipping production-grade AI products Apply at : careers@darwix.ai Subject Line : Application – Principal Software Engineer – AI – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how large sales and CX teams operate across India, MENA, and Southeast Asia. We build deeply integrated conversational intelligence and agent assist tools that enable: Multilingual speech-to-text pipelines Real-time agent coaching AI-powered sales scoring Predictive analytics and nudges CRM and telephony integrations Our clients include leading enterprises like IndiaMart, Bank Dofar, Wakefit, GIVA, and Sobha , and our product is deeply embedded in the daily workflows of field agents, telecallers, and enterprise sales teams. We are backed by top VCs and built by alumni from IIT, IIM, and BITS with deep expertise in real-time AI, enterprise SaaS, and automation. Role Overview We are hiring a Principal Software Engineer – AI to lead the development of advanced AI features in our conversational intelligence suite. This is a high-ownership role that combines software engineering, system design, and AI/ML application delivery. You will work across our GenAI stack—including Whisper, LangChain, LLMs, audio streaming, transcript processing, NLP pipelines, and scoring models—to build robust, scalable, and low-latency AI modules that power real-time user experiences. This is not a research role. You will be building, deploying, and optimizing production-grade AI features used daily by thousands of sales agents and managers across industries. Key Responsibilities 1. AI System Architecture & Development Design, build, and optimize core AI modules such as: Multilingual speech-to-text (Whisper, Deepgram, Google STT) Prompt-based LLM workflows (OpenAI, open-source LLMs) Transcript post-processing: punctuation, speaker diarization, timestamping Real-time trigger logic for call nudges and scoring Build resilient pipelines using Python, FastAPI, Redis, Kafka , and vector databases 2. Production-Grade Deployment Implement GPU/CPU-optimized inference services for latency-sensitive workflows Use caching, batching, asynchronous processing, and message queues to scale real-time use cases Monitor system health, fallback workflows, and logging for ML APIs in live environments 3. ML Workflow Engineering Work with Head of AI to fine-tune, benchmark, and deploy custom models for: Call scoring (tone, compliance, product pitch) Intent recognition and sentiment classification Text summarization and cue generation Build modular services to plug models into end-to-end workflows 4. Integrations with Product Modules Collaborate with frontend, dashboard, and platform teams to serve AI output to users Ensure transcript mapping, trigger visualization, and scoring feedback appear in real-time in the UI Build APIs and event triggers to interface AI systems with CRMs, telephony, WhatsApp, and analytics modules 5. Performance Tuning & Optimization Profile latency and throughput of AI modules under production loads Implement GPU-aware batching, model distillation, or quantization where required Define and track key performance metrics (latency, accuracy, dropout rates) 6. Tech Leadership Mentor junior engineers and review AI system architecture, code, and deployment pipelines Set engineering standards and documentation practices for AI workflows Contribute to planning, retrospectives, and roadmap prioritization What We’re Looking For Technical Skills 6–10 years of backend or AI-focused engineering experience in fast-paced product environments Strong Python fundamentals with experience in FastAPI, Flask , or similar frameworks Proficiency in PyTorch , Transformers , and OpenAI API/LangChain Deep understanding of speech/text pipelines, NLP, and real-time inference Experience deploying LLMs and AI models in production at scale Comfort with PostgreSQL, MongoDB, Redis, Kafka, S3 , and Docker/Kubernetes System Design Experience Ability to design and deploy distributed AI microservices Proven track record of latency optimization, throughput scaling, and high-availability setups Familiarity with GPU orchestration, containerization, CI/CD (GitHub Actions/Jenkins), and monitoring tools Bonus Skills Experience working with multilingual STT models and Indic languages Knowledge of Hugging Face, Weaviate, Pinecone, or vector search infrastructure Prior work on conversational AI, recommendation engines, or real-time coaching systems Exposure to sales/CX intelligence platforms or enterprise B2B SaaS Who You Are A pragmatic builder—you don’t chase perfection but deliver what scales A systems thinker—you see across data flows, bottlenecks, and trade-offs A hands-on leader—you mentor while still writing meaningful code A performance optimizer—you love shaving off latency and memory bottlenecks A product-focused technologist—you think about UX, edge cases, and real-world impact What You’ll Impact Every nudge shown to a sales agent during a live customer call Every transcript that powers a manager’s coaching decision Every scorecard that enables better hiring and training at scale Every dashboard that shows what drives revenue growth for CXOs This role puts you at the intersection of AI, revenue, and impact —what you build is used daily by teams closing millions in sales across India and the Middle East. How to Apply Send your resume to careers@darwix.ai Subject Line: Application – Principal Software Engineer – AI – [Your Name] (Optional): Include a brief note describing one AI system you've built for production—what problem it solved, what stack it used, and what challenges you overcame. If you're ready to lead the AI backbone of enterprise sales , build world-class systems, and drive real-time intelligence at scale— Darwix AI is where you belong.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI Engineer – Voice, NLP, and GenAI Systems Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 2–6 years in AI/ML, NLP, or applied machine learning engineering Apply at : careers@darwix.ai Subject Line : Application – AI Engineer – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite— Transform+ , Sherpa.ai , and Store Intel —powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement. Role Overview As an AI Engineer , you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams. Key ResponsibilitiesAI & NLP System Development Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization. Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data. Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations. LLM & Prompt Engineering Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases. Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules. Develop internal tools for model evaluation and prompt performance tracking. Productionization & Integration Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI). Optimize inference time and resource utilization for real-time and batch processing needs. Implement monitoring and logging for production ML systems to track drift and failure cases. Data & Evaluation Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance. Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks. Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency. Collaboration & Research Work closely with product managers to translate business problems into model design requirements. Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI. Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders. Required Skills & Qualifications 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications. Strong programming skills in Python , including libraries like PyTorch, TensorFlow, Hugging Face Transformers. Experience with speech processing , audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines , embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – AI Engineer – [Your Name] (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India’s fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams—Darwix AI wants to hear from you.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

🧠 Job Title: Engineering Manager Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 7–12 Years Compensation: Competitive salary + ESOPs + Performance-based bonuses 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing AI-first startups, building next-gen conversational intelligence and real-time agent assist tools for sales teams globally. We’re transforming how enterprise sales happens across industries like BFSI, real estate, retail, and telecom with a GenAI-powered platform that combines multilingual transcription, NLP, real-time nudges, knowledge base integration, and performance analytics—all in one. Our clients include some of the biggest names in India, MENA, and SEA. We’re backed by marquee venture capitalists, 30+ angel investors, and operators from top AI, SaaS, and B2B companies. Our founding team comes from IITs, IIMs, BITS Pilani, and global enterprise AI firms. Now, we’re looking for a high-caliber Engineering Manager to help lead the next phase of our engineering evolution. If you’ve ever wanted to build and scale real-world AI systems for global use cases—this is your shot. 🎯 Role Overview As Engineering Manager at Darwix AI, you will be responsible for leading and managing a high-performing team of backend, frontend, and DevOps engineers. You will directly oversee the design, development, testing, and deployment of new features and system enhancements across Darwix’s AI-powered product suite. This is a hands-on technical leadership role , requiring the ability to code when needed, conduct architecture reviews, resolve blockers, and manage the overall engineering execution. You’ll work closely with product managers, data scientists, QA teams, and the founders to deliver on roadmap priorities with speed and precision. You’ll also be responsible for building team culture, mentoring developers, improving engineering processes, and helping the organization scale its tech platform and engineering capacity. 🔧 Key Responsibilities1. Team Leadership & Delivery Lead a team of 6–12 software engineers (across Python, PHP, frontend, and DevOps). Own sprint planning, execution, review, and release cycles. Ensure timely and high-quality delivery of key product features and platform improvements. Solve execution bottlenecks and ensure clarity across JIRA boards, product documentation, and sprint reviews. 2. Architecture & Technical Oversight Review and refine high-level and low-level designs proposed by the team. Provide guidance on scalable architectures, microservices design, performance tuning, and database optimization. Drive migration of legacy PHP code into scalable Python-based microservices. Maintain technical excellence across deployments, containerization, CI/CD, and codebase quality. 3. Hiring, Coaching & Career Development Own the hiring and onboarding process for engineers in your pod. Coach team members through 1:1s, OKRs, performance cycles, and continuous feedback. Foster a culture of ownership, transparency, and high-velocity delivery. 4. Process Design & Automation Drive adoption of agile development practices—daily stand-ups, retrospectives, sprint planning, documentation. Ensure production-grade observability, incident tracking, root cause analysis, and rollback strategies. Introduce quality metrics like test coverage, code review velocity, time-to-deploy, bug frequency, etc. 5. Cross-functional Collaboration Work closely with the product team to translate high-level product requirements into granular engineering plans. Liaise with QA, AI/ML, Data, and Infra teams to coordinate implementation across the board. Collaborate with customer success and client engineering for debugging and field escalations. 🔍 Technical Skills & Stack🔹 Primary Languages & Frameworks: Python (FastAPI, Flask, Django) PHP (legacy services; transitioning to Python) TypeScript, JavaScript, HTML5, CSS3 Mustache templates (preferred), React/Next.js (optional) 🔹 Databases & Storage: MySQL (primary), PostgreSQL MongoDB, Redis Vector DBs: Pinecone, FAISS, Weaviate (RAG pipelines) 🔹 AI/ML Integration: OpenAI APIs, Whisper, Wav2Vec, Deepgram Langchain, HuggingFace, LlamaIndex, LangGraph 🔹 DevOps & Infra: AWS EC2, S3, Lambda, CloudWatch Docker, GitHub Actions, Nginx Git (GitHub/GitLab), Jenkins (optional) 🔹 Monitoring & Testing: Prometheus, Grafana, Sentry PyTest, Selenium, Postman ✅ Candidate Profile👨‍💻 Experience: 7–12 years of total engineering experience in high-growth product companies or startups. At least 2 years of experience managing teams as a tech lead or engineering manager. Experience working on real-time data systems, microservices architecture, and SaaS platforms. 🎓 Education: Bachelor’s or Master’s degree in Computer Science or related field. Preferred background from Tier 1 institutions (IITs, BITS, NITs, IIITs). 💼 Traits We Love: You lead with clarity, ownership, and high attention to detail. You believe in building systems—not just shipping features. You are pragmatic and prioritize team delivery velocity over theoretical perfection. You obsess over latency, clean interfaces, and secure deployments. You want to build a high-performing tech org that scales globally. 🌟 What You’ll Get Leadership role in one of India’s top GenAI startups Competitive fixed compensation with performance bonuses Significant ESOPs tied to company milestones Transparent performance evaluation and promotion framework A high-speed environment where builders thrive Access to investor and client demos, roadshows, GTM huddles, and more Annual learning allowance and access to internal AI/ML bootcamps Founding-team-level visibility in engineering decisions and product innovation 🛠️ Projects You’ll Work On Real-time speech-to-text engine in 11 Indian languages AI-powered live nudges and agent assistance in B2B sales Conversation summarization and analytics for 100,000+ minutes/month Automated call scoring and custom AI model integration Multimodal input processing: audio, text, CRM, chat Custom knowledge graph integrations across BFSI, real estate, retail 📢 Why This Role Matters This is not just an Engineering Manager role. At Darwix AI, every engineering decision feeds directly into how real sales teams close deals. You’ll see your work powering real-time customer calls, nudging field reps in remote towns, helping CXOs make hiring decisions, and making a measurable impact on enterprise revenue. You’ll help shape the core technology platform of a company that’s redefining how humans and machines interact in sales. 📩 How to Apply Email your resume, GitHub/portfolio (if any), and a few lines on why this role excites you to: 📧 people@darwix.ai Subject: Application – Engineering Manager – [Your Name] If you’re a technical leader who thrives on velocity, takes pride in mentoring developers, and wants to ship mission-critical AI systems that power revenue growth across industries, this is your stage . Join Darwix AI. Let’s build something that lasts.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Lead AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🧠 Job Title: Senior Machine Learning Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 4–8 years Education : B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields 🚀 About Darwix AI Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI , and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+ , analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast. As a Senior Machine Learning Engineer , you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights. 🎯 Role Overview This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value. 🧪 Key Responsibilities🔬 1. Model Design, Training, and Optimization Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources. Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems. Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models. 📊 2. Data Engineering & Feature Extraction Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks. Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks. Automate data collection from APIs, CRMs, sales transcripts, and call logs. ⚙️ 3. Productionizing ML Pipelines Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks). Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines. Ensure production readiness: logging, monitoring, rollback, and fail-safes. 📈 4. Experimentation & Evaluation Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops. Continuously optimize model performance (latency, accuracy, precision-recall trade-offs). Implement drift detection and re-training pipelines for models in production. 🔁 5. Collaboration with Product & Engineering Translate business problems into ML problems and align modeling goals with user outcomes. Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features. Contribute to the product roadmap with ML-driven ideas and prototypes. 🛠️ 6. Innovation & Technical Leadership Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques. Drive innovation in voice-to-insight systems (ASR + Diarization + NLP). Mentor junior engineers and contribute to best practices in ML development and deployment. 🧰 Tech Stack🔧 Languages & Frameworks Python (core), SQL, Bash PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM 🧠 ML & AI Ecosystem Transformers, RNNs, CNNs, CRFs BERT, RoBERTa, GPT-style models OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude FAISS, Pinecone, Qdrant, LlamaIndex ☁️ Deployment & Infrastructure Docker, Kubernetes, GitHub Actions, Jenkins AWS (EC2, Lambda, S3, SageMaker), GCP, Azure Redis, PostgreSQL, MongoDB 📊 Monitoring & Experimentation MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana 👨‍💼 Qualifications🎓 Education Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines. Certifications in advanced ML, data science, or AI are a plus. 🧑‍💻 Experience 4–8 years of hands-on experience in applied machine learning. Demonstrated success in deploying models to production at scale. Deep familiarity with transformer-based architectures and model evaluation. ✅ You’ll Excel In This Role If You… Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration. Obsess over clean, maintainable, reusable code and pipelines. Think from first principles and challenge model assumptions when they don’t work. Are deeply curious and have built multiple projects just because you wanted to know how something works. Are comfortable working with ambiguity, fast timelines, and real-time data challenges. Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos. 💼 What You’ll Get at Darwix AI Work with some of the brightest minds in AI , product, and design. Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases. Direct mentorship from senior architects and AI scientists. Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory. Opportunity to shape the future of a global-first AI startup built from India. Hands-on experience with the most advanced tech stack in applied ML and production AI. Front-row seat to a generational company that is redefining enterprise AI. 📩 How to Apply Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?” Email: people@darwix.ai Subject: Senior ML Engineer – Application 🔐 Final Notes We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you’re looking for a role where you own real impact , push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you. Darwix AI – GenAI for Revenue Teams. Built from India, for the World.

Posted 1 week ago

Apply

6.0 - 8.0 years

2 - 6 Lacs

Chennai

On-site

Overview We are looking for a high-impact Product Manager who thrives at the intersection of technology and pharma/life sciences . This role demands a sharp strategic thinker with hands-on technical depth , product ownership mindset , and a solid grasp of pharma domain knowledge — from primary market research (PMR) insights , competitive intelligence (CI) , to brand strategy . If you can translate brand/medical/commercial objectives into robust, scalable product solutions using AWS-native architectures , ML/GenAI models , and modern DevOps practices , you belong here. Key Responsibilities Product Leadership Own the end-to-end product lifecycle from discovery to launch across pharma/life sciences use cases. Translate unmet market and brand needs into differentiated, scalable, and user-centric product solutions. Prioritize features across platform modules by aligning commercial, medical, and data science needs. Partner with commercial, brand, and medical teams to translate PMR and CI into actionable product features. Technical & Platform Strategy Drive architectural discussions and product decisions around AWS cloud infrastructure , including Glue , Athena , Data Lake , S3 , Lambda , and Step Functions . Collaborate with engineering to ensure CI/CD pipelines , Docker , Kubernetes , and ML Ops practices are integrated for faster product iterations. Enable delivery of GenAI capabilities in the platform — from document intelligence, medical NLP, summarization to insight generation. Data & AI Productization Lead data strategy for ingesting, cleaning, and transforming EMR, Claims, HCP/HCO, and RWD data using PySpark , SQL , and data pipelines . Build roadmap around ML/GenAI-driven use cases: e.g., treatment pathway prediction, KOL segmentation, site recommendation, competitive tracking. Collaborate with data scientists to deploy models in production using APIs and cloud-native services. Market & Domain Expertise Leverage deep knowledge of pharma workflows (Medical Affairs, Market Access, Clinical Dev, Commercial Ops). Map out patient journeys, treatment landscapes, and brand objectives into platform features. Convert PMR data and CI signals into competitive positioning and product differentiation. Required Qualifications 6–8 years of experience in product management or technical product ownership. Strong experience in pharma or life sciences industry — ideally in commercial, medical, or clinical tech products. Proven hands-on experience with AWS cloud architecture , especially Glue, Athena, Data Lake, Step Functions. Proficient in Python , SQL , PySpark , and working knowledge of ML modeling & GenAI frameworks (LangChain, OpenAI, HuggingFace, etc.) . Strong grasp of DevOps pipelines (CI/CD, GitHub Actions/GitLab, Terraform, Docker, K8s) . Strong understanding of data engineering concepts — ingestion, normalization, feature engineering, and ML pipeline orchestration. Familiarity with primary market research methodologies , CI tools , and brand strategy in pharma. Preferred Skills Prior experience building SaaS or platform products in regulated industries. Knowledge of data privacy, HIPAA, and compliance frameworks.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

We are seeking a Fractional AI Data Scientist with deep healthcare analytics experience to support the design of agentic AI workflows , build LLM-powered tools , and structure data pipelines from EHRs, payer systems, and clinical sources. Your work will power intelligent automations for Eligibility Verification , Pre-Authorization , Risk Stratification , and more. You’ll work closely with solution architects, automation engineers, and clinical SMEs to ensure healthcare data is structured, insightful, and responsibly applied in AI contexts. 📌 *Key Responsibilities* Build and fine-tune AI/ML/NLP models tailored to healthcare datasets (structured & unstructured). Design intelligent prompts and evaluation pipelines using LLMs (OpenAI, Azure OpenAI). Work with healthcare data from Epic, Cerner, Availity, and claims sources to build actionable insights. Partner with Azure engineers or Workato specialists to build data-driven agentic workflows. Cleanse and transform healthcare data (FHIR, HL7, CSV, SQL) for modeling and automation triggers. Ensure all solutions comply with HIPAA and ethical AI best practices. Visualize outcomes for business and clinical teams, and document models for reuse. 🧠 *Required Skills & Experience* 5+ years in data science with at least 2+ in healthcare-specific roles. Experience with clinical data (EHR, EMR, payer claims) and healthcare ontologies (ICD-10, CPT, FHIR). Hands-on with LLM tools (OpenAI, LangChain, RAG frameworks) for classification, summarization, or chatbot use cases. Strong proficiency in Python, SQL, Pandas, and ML/NLP frameworks. Familiarity with PHI/PII handling and compliance frameworks like HIPAA. ⭐ *Preferred Qualifications* Azure AI stack (OpenAI, Data Factory, Synapse) Experience in conversational AI, intake automation, or clinical note summarization Worked in or with a digital health, healthtech, or AI startup environment Understanding of automation platforms (Workato, Power Automate) 🛠️ *Tech Stack* Languages: Python, SQL, PySpark AI/ML: Scikit-learn, OpenAI, Hugging Face, LangChain, Transformers Data: Azure Data Factory, Snowflake, BigQuery, Postgres Integration: FHIR APIs, REST APIs, Postman Visualization: Power BI, Streamlit, Tableau Compliance: HIPAA, De-ID, RBAC

Posted 1 week ago

Apply

3.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Company Description Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We’re an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity Job Description We are looking for a Python Backend Engineer with exposure to AI engineering to join our team in building a scalable, cognitive data platform. This platform will crawl and process unstructured data sources, enabling intelligent data extraction and analysis. The ideal candidate will have deep expertise in backend development using FastAPI, RESTful APIs, SQL, and Azure data technologies, with a secondary focus on integrating AI/ML capabilities into the product. Core Responsibilities Design and develop high-performance backend services using Python (FastAPI). Develop RESTful APIs to support data ingestion, transformation, and AI-based feature access. Work closely with DevOps and data engineering teams to integrate backend services with Azure data pipelines and databases. Manage database schemas, write complex SQL queries, and support ETL processes using Python-based tools. Build secure, scalable, and production-ready services following best practices in logging, authentication, and observability. Implement background tasks and async event-driven workflows for data crawling and processing. AI Engineering Contributions : Support integration of AI models (NLP, summarization, information retrieval) within backend APIs. Collaborate with AI team to deploy lightweight inference pipelines using PyTorch, TensorFlow, or ONNX. Participate in training data pipeline design and minor model fine-tuning as needed for business logic. Contribute to the testing, logging, and monitoring of AI agent behavior in production environments. Qualifications 3+ years of experience in Python backend development, with strong experience in FastAPI or equivalent frameworks. Solid understanding of RESTful API design, asynchronous programming, and web application architecture. Proficiency in working with relational databases (e.g., PostgreSQL, MS SQL Server) and Azure cloud services. Experience with ETL workflows, job scheduling, and data pipeline orchestration (Airflow, Prefect, etc.). Exposure to machine learning libraries (e.g., Scikit-learn, Transformers, OpenAI APIs) is a plus. Familiarity with containerization (Docker), CI/CD practices, and performance tuning. A mindset of code quality, scalability, documentation, and collaboration. Additional Information Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. Future development: At Quantanite, you’ll have a personal development plan to help you improve in the areas you’re looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting you to the next level. You’ll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you’ll have the chance to take on more responsibility over time. So, if you’re looking for a career full of purpose and potential, we’d love to hear from you!

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Pune, Maharashtra, India

On-site

We are looking for a highly skilled AI/ML/Gen AI Data Scientist with expertise in Generative AI, Machine Learning, Deep Learning, and Natural Language Processing (NLP) . The ideal candidate should have a strong foundation in Python-based AI frameworks and experience in developing, deploying, and optimizing AI models for real-world applications. Key Responsibilities Develop and implement AI/ML models . Work with Deep Learning architectures like Transformers (BERT, GPT, LLaMA) and CNNs/RNNs. Fine-tune and optimize Large Language Models (LLMs) for various applications. Design and train custom Machine Learning models using structured and unstructured data. Leverage NLP techniques such as text summarization, Named Entity Recognition (NER). Implement ML pipelines and deploy models in cloud environments (AWS/GCP/Azure). Collaborate with cross-functional teams to integrate AI-driven solutions into business applications. Stay updated with latest AI advancements and apply innovative techniques to improve model performance. Required Skills & Qualifications 1 to 3 years of experience in AI/ML, Deep Learning, and Generative AI . Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, Hugging Face, Scikit-learn . Hands-on experience with NLP models , including BERT, GPT, T5, LLaMA, and Stable Diffusion . Expertise in data preprocessing, feature engineering, and model evaluation . Experience with MLOps, cloud-based AI deployment, and containerization (Docker, Kubernetes) . Knowledge of vector databases and retrieval-augmented generation (RAG) techniques. Ability to fine-tune LLMs and work with prompt engineering . Strong problem-solving skills and ability to work in agile environments . Educational Requirements Bachelor's, Master's, or PhD in Computer Science/Artificial Intelligence/Information Technology Preferred Skills (Good To Have) Experience with Reinforcement Learning (RLHF) and multi-modal AI . Familiarity with AutoML, Responsible AI (RAI), and AI Ethics . Exposure to Graph Neural Networks (GNNs) and time-series forecasting. Contributions to open-source AI projects or research papers. Skills:- Machine Learning (ML), Generative AI, Artificial Intelligence (AI), Python and Large Language Models (LLM)

Posted 1 week ago

Apply

3.0 years

0 Lacs

Nagercoil, Tamil Nadu, India

On-site

Job Title: Bilingual Content Writer (Tamil & English) – Politics & Research Focus Location: Nagercoil, Tamil Nadu Company: Chasseur Cyber Solutions Job Type: Full-Time Experience Required: 1–3 Years Salary: Based on skills and experience Job Summary: We are seeking a highly skilled and research-driven Content Writer who is fluent in both Tamil and English , with a strong grip on Tamil Nadu politics , current affairs, and trending social issues. The ideal candidate should be experienced in crafting social media and long-form content, capable of handling both creative and analytical writing with political sensitivity and factual accuracy. Key Responsibilities: Write and publish original, well-researched content in both Tamil and English Create news-style articles, social media posts, scripts, political commentary, and opinion pieces Analyze ongoing political developments , government schemes, public reactions, and election trends Develop content tailored to Twitter, Instagram, Facebook, YouTube , etc., with a political or news-oriented angle Provide fact-checked research support for articles, videos, and campaign materials Collaborate with multimedia teams for trolls, memes, reels, and infographics Prepare content calendars , hashtag plans, and storyboards around trending issues and current events Ensure accuracy, neutrality (when needed), and compliance with platform and legal guidelines Track public sentiment and media narratives to create responsive or viral content Requirements: Strong command over both Tamil and English (writing, reading, and editing) Proven experience in news writing, journalism, political content, or research analysis Good understanding of Tamil Nadu politics , social structures, and public discourse Excellent research, fact-checking, and summarization skills Experience in writing for digital media, SEO-based blogs, and social platforms Creativity, speed, and the ability to write under tight deadlines Familiarity with tools like Google Trends, Twitter/X analytics, and content schedulers is a plus Bonus Points For: Background in journalism, mass communication, or political science Experience creating or managing political troll videos, memes, or satire content Knowledge of PR campaigns, media influence, and sentiment management To Apply: 📧 Send your resume and writing samples (Tamil & English) to: hr@chasseurcybersolutions.com 📱 WhatsApp: +91-9385591455

Posted 1 week ago

Apply

2.0 years

0 Lacs

Haryana

On-site

Provectus helps companies adopt ML/AI to transform the ways they operate, compete, and drive value. The focus of the company is on building ML Infrastructure to drive end-to-end AI transformations, assisting businesses in adopting the right AI use cases, and scaling their AI initiatives organization-wide in such industries as Healthcare & Life Sciences, Retail & CPG, Media & Entertainment, Manufacturing, and Internet businesses. We are seeking a highly skilled Machine Learning (ML) Tech Lead with a strong background in Large Language Models (LLMs) and AWS Cloud services. The ideal candidate will oversee the development and deployment of cutting-edge AI solutions while managing a team of 5-10 engineers. This leadership role demands hands-on technical expertise, strategic planning, and team management capabilities to deliver innovative products at scale. Responsibilities: Leadership & Management Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment; Drive the roadmap for machine learning projects aligned with business goals; Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery. Machine Learning & LLM Expertise Design, develop, and fine-tune LLMs and other machine learning models to solve business problems; Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction; Stay ahead of advancements in LLMs and apply emerging technologies; Expertise in multiple main fields of ML: NLP, Computer Vision, RL, deep learning and classical ML. AWS Cloud Expertise Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.); Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS; Ensure best practices in security, monitoring, and compliance within the cloud infrastructure. Technical Execution Oversee the entire ML lifecycle, from research and experimentation to production and maintenance; Implement MLOps and LLMOps practices to streamline model deployment and CI/CD workflows; Debug, troubleshoot, and optimize production ML models for performance. Team Development & Communication Conduct regular code reviews and ensure engineering standards are upheld; Facilitate professional growth and learning for the team through continuous feedback and guidance; Communicate progress, challenges, and solutions to stakeholders and senior leadership. Qualifications: Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models); Strong expertise in AWS Cloud Services; Strong experience in ML/AI, including at least 2 years in a leadership role; Hands-on experience with Python, TensorFlow/PyTorch, and model optimization; Familiarity with MLOps tools and best practices; Excellent problem-solving and decision-making abilities; Strong communication skills and the ability to lead cross-functional teams; Passion for mentoring and developing engineers.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Position Overview We are seeking a meticulous Legal Operations Intern to handle day-to-day legal operations including NDA management, contract review, document abstraction, and redlining support for ongoing projects. Key Responsibilities Review, negotiate, and manage Non-Disclosure Agreements (NDAs) Perform contract abstraction and create executive summaries Provide redlining support for various legal documents and agreements Summarize complex legal documents for internal stakeholders Assist in contract lifecycle management Support ongoing projects with legal documentation review Maintain contract databases and tracking systems Coordinate with internal teams on legal requirements for projects Required Qualifications Currently pursuing LLB or LLM degree (2nd year onwards preferred) Strong contract review and negotiation skills Experience with redlining and document markup Excellent summarization and abstraction abilities Proficiency in legal document management systems Knowledge of commercial contracts and corporate law Attention to detail and ability to work under tight deadlines Proficiency in MS Office Suite and PDF editing tools

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Jaipur

On-site

Unlock yourself. Take your career to the next level. At Atrium, we live and deliver at the intersection of industry strategy, intelligent platforms, and data science — empowering our customers to maximize the power of their data to solve their most complex challenges. We have a unique understanding of the role data plays in the world today and serve as market leaders in intelligent solutions. Our data-driven, industry-specific approach to business transformation for our customers places us uniquely in the market. Who are you? You are smart, collaborative and take ownership to get things done. You love to learn and are intellectually curious in business and technology tools, platforms and languages. You are energized by solving complex problems and bored when you don’t have something to do. You love working in teams, and are passionate about pulling your weight to make sure the team succeeds. What will you be doing at Atrium? We are seeking a highly skilled AI QA Consultant to ensure the quality, functionality, and performance of our AI solutions. This role is responsible for testing across the Salesforce platform with a specialised focus on validating AI-powered features, agentic workflows, and integrations with Large Language Models (LLMs). This position bridges traditional Salesforce QA with modern AI testing principles, requiring a deep understanding of Salesforce best practices, a keen eye for detail, and the technical acumen to test complex, non-deterministic systems. You will work closely with AI developers, Salesforce administrators, and business stakeholders to deliver robust and reliable AI-driven user experiences. The QA Consultant will: Conduct thorough manual and exploratory testing of AI features on the Salesforce platform. You will identify, document, and track defects in Jira, clearly distinguishing between standard configuration bugs and nuanced, probabilistic issues inherent in AI model outputs Take ownership of data quality for the entire AI fine-tuning lifecycle. You will be responsible for validating and helping prepare the high-quality, unbiased datasets that are essential for building reliable and effective enterprise AI Develop detailed test plans, test cases, and test scripts specifically designed to evaluate the accuracy, relevance, safety, and performance of LLM responses and AI-driven logic Collaborate with product owners and AI developers to understand requirements and design comprehensive test strategies for Salesforce solutions that incorporate AI components like intelligent chatbots, RAG-based knowledge retrieval, automated case summarization, and agentic process automation Create scenarios to test for prompt injection vulnerabilities, model hallucinations, data grounding issues, and bias in AI outputs Assess the performance, latency, and resource consumption of AI features to ensure they meet user experience benchmarks Facilitate User Acceptance Testing (UAT) for AI-powered features. Guides business users on how to evaluate and provide feedback on intelligent systems Stay updated with the latest advancements in LLMs, prompt engineering techniques, and AI testing methodologies, and champion the adoption of best practices In this role, you will have: Strong foundation with 2-3 years of dedicated QA experience on the Salesforce platform. It is crucial for validating how AI features interact with core Salesforce objects, data models, and business workflows A solid understanding of modern AI/NLP concepts, including Retrieval-Augmented Generation (RAG), model fine-tuning, and the principles of creating feedback loops for reinforcement learning (e.g., RLHF) Proficiency in Python, used for writing test scripts, data manipulation, and interacting with AI APIs Hands-on experience testing applications built with LLM APIs (like OpenAI, Gemini), frameworks such as LangChain/LlamaIndex, and various Vector Databases Working knowledge of SQL, NoSQL, and vector databases Excellent analytical and communication skills, with experience thriving in an Agile/Scrum environment Next Steps Our recruitment process is highly personalized. Some candidates complete the hiring process in one week, others may take longer as it’s important we find the right position for you. It's all about timing and can be a journey as we continue to learn about one another. We want to get to know you and encourage you to be selective - after all, deciding to join a company is a big decision! At Atrium, we believe a diverse workforce allows us to match our growth ambitions and drive inclusion across the business. We are an equal opportunity employer and all qualified applicants will receive consideration for employment.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Andhra Pradesh

Remote

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Role Title: Machine Learning Analyst Position Summary: We value our talented employees, and whenever possible strive to help one of our associates grow professionally before recruiting new talent to our open positions. If you think the open position you see is right for you, we encourage you to apply! Our people make all the difference in our success. We leverage cutting edge Artificial Intelligence (AI) and Machine Learning (ML) algorithms to develop solutions for automated document processing and customer service chat bots. We are looking for AI Prompt Engineers with strong engineering, full stack expertise to build the best fit solutions leveraging Large Language Models (LLMs) and Generative AI solutions. Extreme focus on speed to market and getting Products and Services in the hands of customer and passion to transform healthcare is key to the success of this role. Build scalable software solutions using LLM’s and other ML models to solve challenges in healthcare. Build enterprise grade AI solutions with focus on privacy, security, fairness. Work with Product Development as a Generative Artificial Intelligence (AI) subject matter expert and architect and develop scalable, resilient, ethical AI solutions Strong engineering skills to design the output from the AI with nodes and nested nodes in JSON or array, HTML formats as required – This is critical so that the AI output can be consumed as is and displayed on the dashboard for accelerated development cycles. Build extensible API Integrations, low code UI / UX solutions, with extremely short cycle times, to extract information from sources, integrate with GPT4, receive insights and make them available in intuitive, high performing dashboards Build solutions that align with responsible AI practices. Envision the solution outcomes to solve for the business problem with actionable insights and design viable solutions to meet the outcomes. Understand how AI is interpreting the data set and use that understanding to build prompts that lead to expected outcomes Architect and develop software or infrastructure for scalable, distributed systems and with machine learning technologies. Work with frameworks(Tensorflow, PyTorch) and open source platforms like Hugging Face to deliver the best solutions Optimize existing generative AI models for improved performance, scalability, and efficiency. Develop and maintain AI pipelines, including data preprocessing, feature extraction, model training, and evaluation. Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders. Contribute to the establishment of best practices and standards for generative AI development within the organization. Experience Required: Overall 1-3 years of experience 1+ years of Full stack engineering expertise with languages like C#, Python and Proficiency in designing architecture, building API Integrations, configuring and deploying cloud services, setting up authentication, monitoring and logging Experience in implementing enterprise systems in production setting for AI, computer vision, natural language processing. Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus. Experience with information storage/retrieval using vector databases like pinecone. Strong understanding and exposure in natural language generation or Gen AI like transformers, LLM’s, text embedding’s. Experience with designing scalable software systems for classification, text extraction/summary, data connectors for different formats (pdf, csv, doc, etc) Experience with machine learning libraries and frameworks such as PyTorch or TensorFlow, Hugging Face, Lang chain, Llama Index. 1+ years of experience working in a complex, matrixed organization involving cross-functional or cross-business projects. Programming experience in C / C++, Java, Python. Strong knowledge of data structures, algorithms, and software engineering principles. Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions. Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience. Possess a proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment. Experience Desired: Familiarity with cloud-based platforms and services, such as AWS, GCP, or Azure. Hands on experience on tools / products like Dynatrace, Splunk, Service Now preferred. Experience in identifying KPI on solution build using Generative AI, for Ex – RAG based Q&A, data summarization, text content generation, data extraction, etc. will be a big advantage. Understanding of standards and regulation related to use of GenAI for a US based Health Insurance provider preferred. Healthcare experience (preferred but not mandate) Education and Training Required: Degree in Computer Science, Artificial Intelligence, or a related field. Location & Hours of Work: Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required. Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations. About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a highly skilled AI/ML/Gen AI Data Scientist with expertise in Generative AI, Machine Learning, Deep Learning, and Natural Language Processing (NLP) . The ideal candidate should have a strong foundation in Python-based AI frameworks and experience in developing, deploying, and optimizing AI models for real-world applications. Key Responsibilities: Develop and implement AI/ML models . Work with Deep Learning architectures like Transformers (BERT, GPT, LLaMA) and CNNs/RNNs. Fine-tune and optimize Large Language Models (LLMs) for various applications. Design and train custom Machine Learning models using structured and unstructured data. Leverage NLP techniques such as text summarization, Named Entity Recognition (NER). Implement ML pipelines and deploy models in cloud environments (AWS/GCP/Azure). Collaborate with cross-functional teams to integrate AI-driven solutions into business applications. Stay updated with latest AI advancements and apply innovative techniques to improve model performance. Required Skills & Qualifications: 1 to 3 years of experience in AI/ML, Deep Learning, and Generative AI . Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, Hugging Face, Scikit-learn . Hands-on experience with NLP models , including BERT, GPT, T5, LLaMA, and Stable Diffusion . Expertise in data preprocessing, feature engineering, and model evaluation . Experience with MLOps, cloud-based AI deployment, and containerization (Docker, Kubernetes) . Knowledge of vector databases and retrieval-augmented generation (RAG) techniques. Ability to fine-tune LLMs and work with prompt engineering . Strong problem-solving skills and ability to work in agile environments . Educational Requirements: Bachelor's, Master's, or PhD in Computer Science/Artificial Intelligence/Information Technology Preferred Skills (Good to Have): Experience with Reinforcement Learning (RLHF) and multi-modal AI . Familiarity with AutoML, Responsible AI (RAI), and AI Ethics . Exposure to Graph Neural Networks (GNNs) and time-series forecasting. Contributions to open-source AI projects or research papers.

Posted 1 week ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies