Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Position: Medical Summarization Qualification :- CPC Certification/ Non CPC Certification with Medical Graduation Salary : Upto Rs2.16LPA Location : Ernakulam Job Mode - Work from Office Immediate Joining Show more Show less
Posted 3 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We’re building the next generation of AI-powered customer success. Our platform ingests and interprets real-time product usage data, conversational history (emails, chats, meetings), and other customer signals to proactively drive customer engagement, reduce churn, and increase satisfaction. If you’re excited about solving complex AI challenges with real-world impact and working at the intersection of machine learning, generative AI, and applied data science—this is the role for you. What You'll Do Lead the design and development of AI systems that analyze product usage and conversational data to generate intelligent, real-time customer insights. Research and prototype machine learning and generative AI models for: Customer intent prediction Churn and risk modelling Conversational summarization and generation Behaviour driven engagement automation Collaborate closely with engineering, product, and data teams to deploy AI models into production pipelines. Contribute to model evaluation, bias mitigation, and continuous improvement strategies. Stay current with AI/ML research and suggest applicable innovations to keep our product ahead of the curve. What We're Looking For Experience: 4+ years in AI/ML research or applied roles, ideally in a SaaS or customer-centric product environment. Expertise in: Natural Language Processing (NLP) and Conversational AI (e.g., summarization, dialogue generation) Machine Learning for time-series, behavioral analytics, and customer modeling GenAI frameworks (OpenAI, Hugging Face, LangChain, or similar) Strong understanding of data pipelines, feature engineering, and model evaluation. Experience working with large-scale unstructured and structured data. Demonstrated ability to convert research into deployable, scalable solutions. M.S. or Ph.D. in Computer Science, AI, Data Science, or a related field. Good to have: Experience with real-time data systems or event-driven architectures (e.g., Kafka, Pub/Sub). Prior work in Customer Success tech, CRM systems, or marketing automation platforms. Contributions to open-source AI/ML projects or published research papers. Show more Show less
Posted 3 weeks ago
0.0 - 2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description A motivated Life Science graduate with 0-2 years of experience, preferably in medical records reviewing/summarization or medical content writing. In this role, you will be responsible for analyzing and summarizing medical records to support case evaluations, ensuring accuracy and adherence to timelines. On-site work opportunity in our Chennai office. India compensation is based upon the local competitive market. Responsibilities Review and summarize medical records with attention to detail. Identify key data points and compile concise summaries. Collaborate with team members to ensure timely completion of cases. Maintain confidentiality and comply with medical record handling standards. Qualifications Bachelor's degree in Life Sciences or related field. 0-2 years of experience in medical records review or summarization (preferred). Strong analytical and written communication skills. Familiarity with medical terminology is a plus. Our Cultural Values Entrepreneurs At Heart, We Are a Customer First Team Sharing One Goal And One Vision. We Seek Team Members Who Are Humble - No one is above another; we all work together to meet our clients’ needs and we acknowledge our own weaknesses Hungry - We all are driven internally to be successful and to continually expand our contribution and impact Smart - We use emotional intelligence when working with one another and with clients Our culture shapes our actions, our products, and the relationships we forge with our customers. Who We Are KLDiscovery provides technology-enabled services and software to help law firms, corporations, government agencies and consumers solve complex data challenges. The company, with offices in 26 locations across 17 countries, is a global leader in delivering best-in-class eDiscovery, information governance and data recovery solutions to support the litigation, regulatory compliance, internal investigation and data recovery and management needs of our clients. Serving clients for over 30 years, KLDiscovery offers data collection and forensic investigation, early case assessment, electronic discovery and data processing, application software and data hosting for web-based document reviews, and managed document review services. In addition, through its global Ontrack Data Recovery business, KLDiscovery delivers world-class data recovery, email extraction and restoration, data destruction and tape management. KLDiscovery has been recognized as one of the fastest growing companies in North America by both Inc. Magazine (Inc. 5000) and Deloitte (Deloitte’s Technology Fast 500). Additionally, KLDiscovery is an Orange-level Relativity Best in Service Partner, a Relativity Premium Hosting Partner and maintains ISO/IEC 27001 Certified data centers. KLDiscovery is an Equal Opportunity Employer. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business. Job Title: Data Scientist/Senior Data Scientist Location: Bangalore/Mumbai/Gurgaon/Chennai/Pune/Noida/Hyderabad Responsibilities Design and implement advanced solutions utilizing Large Language Models (LLMs). Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions. Conduct research and stay informed about the latest developments in generative AI and LLMs. Develop and maintain code libraries, tools, and frameworks to support generative AI development. Participate in code reviews and contribute to maintaining high code quality standards. Engage in the entire software development lifecycle, from design and testing to deployment and maintenance. Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility. Possess strong analytical and problem-solving skills. Demonstrate excellent communication skills and the ability to work effectively in a team environment. Primary Skills Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation. AND/OR Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis. Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities. Familiarity with Open-source LLMs, including tools like TensorFlow/Pytorch and huggingface. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization. Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred. Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest! Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business. Job Title: Data Scientist/Senior Data Scientist Location: Bangalore/Mumbai/Gurgaon/Chennai/Pune/Noida/Hyderabad Responsibilities Design and implement advanced solutions utilizing Large Language Models (LLMs). Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions. Conduct research and stay informed about the latest developments in generative AI and LLMs. Develop and maintain code libraries, tools, and frameworks to support generative AI development. Participate in code reviews and contribute to maintaining high code quality standards. Engage in the entire software development lifecycle, from design and testing to deployment and maintenance. Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility. Possess strong analytical and problem-solving skills. Demonstrate excellent communication skills and the ability to work effectively in a team environment. Primary Skills Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation. AND/OR Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis. Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities. Familiarity with Open-source LLMs, including tools like TensorFlow/Pytorch and huggingface. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization. Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred. Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest! Show more Show less
Posted 3 weeks ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description You are a strategic thinker passionate about driving solutions in “Data Science ”. You have found the right team. As a Data Science professional within our “ Asset Management team” , you will spend each day defining, refining and delivering set goals for our firm The Asset Management Data Science team is focused on enhancing and facilitating various steps in the investment process ranging from financial analysis and portfolio management to client services and advisory. You will utilize a large collection of textual data including financial documents, analyst reports, news, meeting notes and client communications along with more typical structured datasets. You will apply the latest methodologies to generate actionable insights to be directly consumed by our business partners. About Are you excited about using data science and machine learning to make a real impact in the asset management industry? Do you enjoy working with cutting-edge technologies and collaborating with a team of dedicated professionals? If so, the Data Science team at JP Morgan Asset Management could be the perfect fit for you. Here’s why: Real-World Impact: Your work will directly contribute to improving investment process and enhancing client experiences and operational process, making a tangible difference in our asset management business. Collaborative Environment: Join a team that values collaboration and teamwork. You’ll work closely with business stakeholders and technologists to develop and implement effective solutions. Continuous Learning: We support your professional growth by providing opportunities to learn and experiment with the latest data science and machine learning techniques. Job Responsibilities Collaborate with internal stakeholders to identify business needs and develop NLP/ML solutions that address client needs and drive transformation. Apply large language models (LLMs), machine learning (ML) techniques, and statistical analysis to enhance informed decision-making and improve workflow efficiency, which can be utilized across investment functions, client services, and operational process. Collect and curate datasets for model training and evaluation. Perform experiments using different model architectures and hyperparameters, determine appropriate objective functions and evaluation metrics, and run statistical analysis of results. Monitor and improve model performance through feedback and active learning. Collaborate with technology teams to deploy and scale the developed models in production. Deliver written, visual, and oral presentation of modeling results to business and technical stakeholders. Stay up-to-date with the latest research in LLM, ML and data science. Identify and leverage emerging techniques to drive ongoing enhancement. Required Qualifications, Capabilities, And Skills Advanced degree (MS or PhD) in a quantitative or technical discipline or significant practical experience in industry. Minimum of 8 years of experience in applying NLP, LLM and ML techniques in solving high-impact business problems, such as semantic search, information extraction, question answering, summarization, personalization, classification or forecasting. Advanced python programming skills with experience writing production quality code Good understanding of the foundational principles and practical implementations of ML algorithms such as clustering, decision trees, gradient descent etc. Hands-on experience with deep learning toolkits such as PyTorch, Transformers, HuggingFace. Strong knowledge of language models, prompt engineering, model finetuning, and domain adaptation. Familiarity with latest development in deep learning frameworks. Ability to communicate complex concepts and results to both technical and business audiences. Preferred Qualifications, Capabilities, And Skills Prior experience in an Asset Management line of business Exposure to distributed model training, and deployment Familiarity with techniques for model explainability and self validation About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals. Show more Show less
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications Location : Sector 63, Gurugram, Haryana – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM – 8:00 PM Experience : 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments Function : AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment Apply : careers@darwix.ai Subject Line : “Application – Applied ML Scientist – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate: Real-time nudges for agents and reps Conversational analytics and scoring to drive performance CCTV-based behavior insights to boost in-store conversion We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar , and others. We’re backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies. Role Overview We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers . This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments. Key ResponsibilitiesVoice-to-Text (ASR) Engineering Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages Integrate diarization and punctuation recovery pipelines Benchmark and improve transcription accuracy across noisy call environments Optimize ASR latency for real-time and batch processing modes NLP & Conversational Intelligence Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.) Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance Contribute to real-time inference APIs for NLP outputs in live dashboards GenAI & LLM Systems Design and test GenAI prompts for summarization, coaching, and feedback generation Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics Implement prompt tuning, caching, and fallback strategies to ensure system reliability Experimentation & Deployment Own model lifecycle: data preparation, training, evaluation, deployment, monitoring Build reproducible training pipelines using MLflow, DVC, or similar tools Write efficient, well-structured, production-ready code for inference APIs Document experiments and share insights with cross-functional teams Required Qualifications Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields 3–7 years experience applying ML in production, including NLP and/or speech Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper) Strong Python skills with experience in PyTorch or TensorFlow Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker) Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms) Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility Ability to work collaboratively in a fast-paced startup environment Preferred Skills Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.) Knowledge of diarization and speaker separation algorithms Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines Familiarity with inference optimization techniques (quantization, ONNX, TorchScript) Contribution to open-source ASR or NLP projects Working knowledge of AWS/GCP/Azure cloud platforms What Success Looks Like Transcription accuracy improvement ≥ 85% across core languages NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls 3–5 LLM-driven product features delivered in the first year Inference latency reduced by 30–50% through model and infra optimization AI features embedded across all Tier 1 customer accounts within 12 months Life at Darwix AI You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks. Compensation & Perks Competitive fixed salary based on experience Quarterly/Annual performance-linked bonuses ESOP eligibility post 12 months Compute credits and model experimentation environment Health insurance, mental wellness stipend Premium tools and GPU access for model development Learning wallet for certifications, courses, and AI research access Career Path Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines How to Apply Email the following to careers@darwix.ai : Updated resume (PDF) A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?” Optional: GitHub or portfolio links demonstrating your work Subject Line : “Application – Applied Machine Learning Scientist – [Your Name]” Show more Show less
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
India
On-site
Note: Please do not apply if your salary expectations are higher than the provided Salary Range and experience less than 3 years. If you have experience with Travel Industry and worked on Hotel, Car Rental or Ferry Booking before then we can negotiate the package. Company Description Our company is involved in promoting Greece for the last 25 years through travel sites visited from all around the world with 10 million visitors per year such www.greeka.com, www.ferriesingreece.com etc Through the websites, we provide a range of travel services for a seamless holiday experience such online car rental reservations, ferry tickets, transfers, tours etc….. Role Description We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes. Major Responsibilities & Job Requirements include: • Develop and implement NLP/LLM Models. • Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques. • Develop and implement AI models using Python, TensorFlow, and PyTorch. • Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, Layoutlmv3 , EasyOCR, PaddleOCR, or custom-trained models). • Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems. • Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows. • Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources. • Familiarity with Cloud AI Solutions, such as IBM, Azure, Google & AWS. • Work on natural language processing (NLP) tasks and create language models (LLM) for various applications. • Design and maintain SQL databases for storing and retrieving data efficiently. • Utilize machine learning and deep learning techniques to build predictive models. • Collaborate with cross-functional teams to integrate AI solutions into existing systems. • Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and Big Data solutions. • Write clean, maintainable, and efficient code when required. • Handle large datasets and perform big data analysis to extract valuable insights. • Fine-tune pre-trained LLMs using specific type of data and ensure optimal performance. • Proficiency in cloud services from Amazon AWS • Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER. • Develop similarity functions and matching algorithms to align candidate skills with job requirements. • Experience with microservices, Flask, FastAPI, Node.js. • Expertise in Spark, PySpark for big data processing. • Knowledge of advanced techniques such as SVD/PCA, LSTM, NeuralProphet. • Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline. • Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements. Qualifications : • Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. • In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others. • Experience with database technologies and vector representation of data. • Familiarity with similarity functions and distance metrics used in matching algorithms. • Ability to design and implement custom ontologies and classification models. • Excellent problem-solving skills and attention to detail. • Strong communication and collaboration skills. Show more Show less
Posted 3 weeks ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Head of AI & ML Platforms Focus : Voice AI, NLP, Conversation Intelligence for Omnichannel Enterprise Sales Location : Sector 63, Gurugram, Haryana — Full-time, 100% In-Office Work Hours : 10:30 AM – 8:00 PM, Monday to Friday (2nd and 4th Saturdays off) Experience Required : 8–15 years in AI/ML, with 3+ years leading teams in voice, NLP, or conversation platforms Apply : careers@darwix.ai Subject Line : “Application – Head of AI & ML Platforms – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers contextual nudges, conversation scoring, and performance analytics in real time. Our suite of products includes: Transform+ : Real-time conversational intelligence for contact centers and field sales Sherpa.ai : A multilingual GenAI assistant that provides in-the-moment coaching, summaries, and objection handling support Store Intel : A computer vision solution that transforms CCTV feeds into actionable insights for physical retail spaces Darwix AI is trusted by large enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty , and is backed by leading institutional and operator investors. We are expanding rapidly across India, the Middle East, and Southeast Asia. Role Overview We are seeking a highly experienced and technically strong Head of AI & ML Platforms to architect and lead the end-to-end AI systems powering our voice intelligence, NLP, and GenAI solutions. This is a leadership role that blends research depth with applied engineering execution. The ideal candidate will have deep experience in building and deploying voice-to-text pipelines, multilingual NLP systems, and production-grade inference workflows. The individual will be responsible for model design, accuracy benchmarking, latency optimization, infrastructure orchestration, and integration across our product suite. This is a critical leadership role with direct influence over product velocity, enterprise client outcomes, and future platform scalability. Key ResponsibilitiesVoice-to-Text (ASR) Architecture Lead the design and optimization of large-scale automatic speech recognition (ASR) pipelines using open-source and commercial frameworks (e.g., WhisperX, Deepgram, AWS Transcribe) Enhance speaker diarization, custom vocabulary accuracy, and latency performance for real-time streaming scenarios Build fallback ASR workflows for offline and batch mode processing Implement multilingual and domain-specific tuning, especially for Indian and GCC languages Natural Language Processing and Conversation Analysis Build NLP models for conversation segmentation, intent detection, tone/sentiment analysis, and call scoring Implement multilingual support (Hindi, Arabic, Tamil, etc.) with fallback strategies for mixed-language and dialectal inputs Develop robust algorithms for real-time classification of sales behaviors (e.g., probing, pitching, objection handling) Train and fine-tune transformer-based models (e.g., BERT, RoBERTa, DeBERTa) and sentence embedding models for text analytics GenAI and LLM Integration Design modular GenAI pipelines for nudging, summarization, and response generation using tools like LangChain, LlamaIndex, and OpenAI APIs Implement retrieval-augmented generation (RAG) architectures for contextual, accurate, and hallucination-resistant outputs Build prompt orchestration frameworks that support real-time sales coaching across channels Ensure safety, reliability, and performance of LLM-driven outputs across use cases Infrastructure and Deployment Lead the development of scalable, secure, and low-latency AI services deployed via FastAPI, TorchServe, or similar frameworks Oversee model versioning, monitoring, and retraining workflows using MLflow, DVC, or other MLOps tools Build hybrid inference systems for batch, real-time, and edge scenarios depending on product usage Optimize inference pipelines for GPU/CPU balance, resource scheduling, and runtime efficiency Team Leadership and Cross-functional Collaboration Recruit, manage, and mentor a team of machine learning engineers and research scientists Collaborate closely with Product, Engineering, and Customer Success to translate product requirements into AI features Own AI roadmap planning, sprint delivery, and KPI measurement Serve as the subject-matter expert for AI-related client discussions, sales demos, and enterprise implementation roadmaps Required Qualifications 8+ years of experience in AI/ML with a minimum of 3 years in voice AI, NLP, or conversational platforms Proven experience delivering production-grade ASR or NLP systems at scale Deep familiarity with Python, PyTorch, HuggingFace, FastAPI, and containerized environments (Docker/Kubernetes) Expertise in fine-tuning LLMs and building multi-language, multi-modal intelligence stacks Demonstrated experience with tools such as WhisperX, Deepgram, Azure Speech, LangChain, MLflow, or Triton Inference Server Experience deploying real-time or near real-time inference models at enterprise scale Strong architectural thinking with the ability to design modular, reusable, and scalable ML services Track record of building and leading high-performing ML teams Preferred Skills Background in telecom, contact center AI, conversational analytics, or field sales optimization Familiarity with GPU deployment, model quantization, and inference optimization Experience with low-resource languages and multilingual data augmentation Understanding of sales enablement workflows and domain-specific ontology development Experience integrating AI models into customer-facing SaaS dashboards and APIs Success Metrics Transcription accuracy improvement by ≥15% across core languages within 6 months End-to-end voice-to-nudge latency reduced below 5 seconds GenAI assistant adoption across 70%+ of eligible conversations AI-driven call scoring rolled out across 100% of Tier 1 clients within 9 months Model deployment velocity (dev to prod) reduced by ≥40% through tooling and process improvements Culture at Darwix AI At Darwix AI, we operate at the intersection of engineering velocity and product clarity. We move fast, prioritize outcomes over optics, and expect leaders to drive hands-on impact. You will work directly with the founding team and senior leaders across engineering, product, and GTM functions. Expect ownership, direct communication, and a culture that values builders who scale systems, people, and strategy. Compensation and Benefits Competitive fixed compensation Performance-based bonuses and growth-linked incentives ESOP eligibility for leadership candidates Access to GPU/compute credits and model experimentation infrastructure Comprehensive medical insurance and wellness programs Dedicated learning and development budget for technical and leadership upskilling MacBook Pro, premium workstation, and access to industry tooling licenses Career Progression 12-month roadmap: Build and stabilize AI platform across all product lines 18–24-month horizon: Elevate to VP of AI or Chief AI Officer as platform scale increases globally Future leadership role in enabling new verticals (e.g., healthcare, finance, logistics) with domain-specific GenAI solutions How to Apply Send the following to careers@darwix.ai : Updated CV (PDF format) A short statement (200 words max) on: “How would you design a multilingual voice-to-text pipeline optimized for low-resource Indic languages, with real-time nudge delivery?” Links to any relevant GitHub repos, publications, or deployed projects (optional) Subject Line : “Application – Head of AI & ML Platforms – [Your Name]” Show more Show less
Posted 3 weeks ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description It’s an exciting time to be at Infoblox. Named a Top 25 Cyber Security Company by The Software Report and one of Inc . magazine’s Best Workplaces for 2020, Infoblox is the leader in cloud-first networking and security services. Our solutions empower organizations to take full advantage of the cloud to deliver network experiences that are inherently simple, scalable, and reliable for everyone. Infoblox customers are among the largest enterprises in the world and include 70% of the Fortune 500, and our success depends on bright, energetic, talented people who share a passion for building the next generation of networking technologies—and having fun along the way. We are looking for a Staff Data Science Engineer to join our IT Applications and Data team located in Bangalore, reporting to the senior director of Business Intelligence. In this role, you will collaborate with various business functions and IT Development teams to deliver data science and analytics capabilities for the company. You will work with the IT development and product manager to discover the information hidden in data and help us provide insights and make smarter decisions using that data. This is an exceptional opportunity to join a growing, successful, and innovative organization. Infoblox allows you to thrive in a unique work environment that emphasizes career growth, excellence, innovation, and collaboration. You are the ideal candidate if you can influence, collaborate, and break down complex business information into easy-to-understand data products. What you’ll do: Maintain an in-depth understanding of business needs by working closely with stakeholders and understand how data can be turned into information and knowledge Apply data mining techniques, perform statistical analysis, and build high-quality prediction solutions Collect and prepare data from the datalake and other sources and build machine learning (ML) techniques and analytic capabilities Work with IT management and senior engineers to share model outputs through business intelligence (BI) dashboards Complete ad hoc analysis and present results to business users Develop Generative AI (Gen AI) models for use cases such as text generation, summarization, and conversational AI Implement Large Language Models (LLMs) using Python-based frameworks such as LangChain, Hugging Face Transformers, OpenAI APIs, and TensorFlow/Keras Fine-tune and optimize Gen AI models for domain-specific applications Work with Microsoft Fabric for data engineering, real-time analytics, and AI integration Implement Retrieval-Augmented Generation (RAG) techniques to enhance LLM responses using enterprise knowledge bases What you’ll bring: 12+ years of data science experience that includes thought leadership and a sound understanding of business objectives and processes in SaaS companies preferred Solid programming and scripting skills in Python, with experience in PyTorch, TensorFlow, Scikit-learn, and MLflow Hands-on experience with ML algorithms and data science toolkits, such as R, Numpy, and MatLab Practical experience with Generative AI models, such as GPT, BERT, T5, or Stable Diffusion, and LLMs Experience in model deployment and serving using tools like FastAPI, Flask, MLflow, or Vertex AI Understanding of Microsoft Fabric and its ML and AI capabilities Knowledge of RAG for enhancing LLM applications with external data retrieval Excellent scripting and programming skills, including SQL and data extraction tools Sound applied statistical skills, such as statistical testing and regression Fair understanding of BI technologies, such as Tableau and big data analytics Ability to work with managers to use storytelling to showcase the business value of the data science deliveries to executives Bachelor’s degree in computer science, engineering, or a related field; advanced degree preferred What success looks like: After six months, you will… Be the conduit to bring out analytical insights for revenue growth and churn reduction Use your strong cross-functional business knowledge to create data science solutions After About a Year, You Will… Be a critical part of delivering AI possibilities leveraging ML Be a key contributor in elevating the analytic opportunities for the company using IT-managed applications’ data Have a solid relationship with stakeholders We’ve got you covered: Our holistic benefits package includes coverage of your health, wealth, and wellness—as well as a great work environment, employee programs, and company culture. We offer a competitive salary and benefits package, and generous paid time off to help you balance your life. We have a strong culture and live our values every day—we believe in transparency, curiosity, respect, and above all, having fun while delighting our customers. Speaking of a great work environment, here are just a few of the perks you may enjoy, depending on your location Delicious and healthy snacks and beverages Electric vehicle charging stations Outdoor amenities (onsite gym, table tennis, pool table, play area, and courtyard Newly remodeled offices with state-of-the-art amenities Why Infoblox? We’ve created a culture that embraces diversity, equity, and inclusion and rewards innovation, curiosity, and creativity. We achieve remarkable results by working together in a supportive environment that focuses on continuous learning and embraces change. So, whether you’re a software engineer, marketing manager, customer care pro, or product specialist, you belong here, where you will have the opportunity to grow and develop your career. Check out what it’s like to be a Bloxer . We think you’ll be excited to join our team. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Basic Function Claims Litigation Support Services to US P&C Insurance Carrier Providing following Claims Litigation Support Services – Open/Close/Transfer cases Order Medical/Other records and follow-up Record Chronology and Summarization Litigation/Subpoena document review including redactions and privilege logs Interrogatories responses Legal correspondence for Attorney/Paralegal use during litigation Document management Claim File transfer Legal bill review Scheduling, Calendar management and Deposition scheduling/summarization Common Function Achieve individual productivity and quality goals Participate in training initiatives to develop knowledge Integrate procedural changes into daily routine Support other team members in meeting service expectations Adhere to Company policies and procedures Technical Skills Good understanding of law and legal concepts Strong legal analytical skills Excellent legal writing skills Process Specific Skills Exposure to litigation process and legal documentation Awareness of processes like e-discovery & document review Ability to effectively work using desktop computer system, especially Microsoft Office with prior experience of working on client’s interface/tool Soft Skills (Desired) Proficient in legal knowledge and its application Eye for detail Good understanding of US legal system Good understanding of US law and legal concepts Good understanding of US litigation (including e-discovery) and legal documents Excellent English communication skills – written and spoken Good knowledge of MS Word, Excel, and good keyboarding speed Basic knowledge of using the internet, web browsers, and search engines Proficient in working independently Soft Skills (Minimum) Basic understanding of law Analytical and mathematical mind set Proficient with MS Excel Good English communication skills Spirit of collaboration and team work Trainability Ability to work independently Show more Show less
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Senior AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Senior AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-4 years of experience in building and deploying AI/ML systems, with at least 1-2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Senior AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Lead AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Join the Virtual Drive At TCS 28 may 2025 for skill Contact Center Configuration analyst---- Contact Center Technical SME for PAN INDIA location exp req:5+ yrs job Location:PAN INDIA JD Key Responsibilities - Serve as the technical expert for contact center platforms (e.g., Genesys, Avaya, Cisco, Amazon Connect, NICE CXone, five9). - Configure and maintain IVRs, call flows, routing strategies, agent profiles, and AI-powered virtual agents. - Collaborate with business and AI teams to design and implement Conversational AI solutions using platforms like Google Dialogflow, Amazon Lex, or Microsoft Bot Framework - Participate in RFP and proactive deals to provide solutions across contact center - Integrate Generative AI capabilities (e.g., OpenAI, Azure OpenAI, Google Vertex AI) for use cases such as agent assist, auto-summarization, sentiment analysis, and knowledge base generation. - Support the deployment and tuning of AI models for real-time and post-call analytics. - Ensure seamless integration between AI tools, CRM systems, and contact center platforms. - Monitor and optimize AI performance, ensuring high accuracy and customer satisfaction. - Maintain documentation for AI workflows, configurations, and system changes. - Stay current with emerging AI trends and recommend innovative solutions. Show more Show less
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you’d make an outstanding addition to our vibrant team. Siemens Mobility is an independent run company of Siemens AG. Its core business includes rail vehicles, rail automation and electrification solutions, turnkey systems, intelligent road traffic technology and related services. In Mobility, we help our customers meet the need for hard-working mobility solutions. We’re making the lives of people who travel easier and more enjoyable while constantly developing new, intelligent mobility solutions! We are looking for Deputy Commercial Project Manager You’ll make a difference by Responsible for summarized global reporting to the relevant Management level. Documentation in line with the mandatory internal and external requirements. Managing commercial and legal project subjects. Collaboration with the Project Manager in terms of the project's strategic orientation and its respective internal and external communication. Adherence to fiscal, commercial-law and company-internal commercial rules. Commercial project coordination (e.g. application for and follow-up of bank guarantees, insurances, etc.) as well as coordination of legal, fiscal and insurance subjects. Order entry calculation, concurrent costing and final costing. Asset and cash flow management. Correct allocation and monitoring of costs. Preparation of invoices and follow-up of claims. Project-internal controlling (deadlines, costs, quality). Regular project reporting / project status meetings / milestone reviews. Involves in the creation of final project reports and summarization of lessons learned with feedback to the organization. Overall handling of fiscal, currency-related aspects and insurance subjects, involving the responsible department and taking into consideration internal business models. Steers the project's supply chain incl. procurement, delivery and ECC. Wording of the commercial and legal contractual conditions with customers, consortium members and subcontractors. Involves in negotiations as well as interpretation and implementation of contracts in projects. Analysis and assessment of complex, where applicable international contract constellations. Active contract management together with the Project Manager for risks and opportunities. Claim and change order management. Assertion of own claims and prevention of unjustified claims. Agreement on contractual amendments about the scope of supplies and services, prices, deadlines or other contractually stipulated conditions. Identification and financial assessment of opportunities and risks, definition and implementation of suitable measures to reduce risks or realize opportunities, and creation of contingencies for remaining risks, active risk and opportunity management. Desired Skills: You should have minimum experience of 5-8 years with bachelor’s degree in commerce or finance or accounting along with CA/CS/ICWA with basic understanding of Project Management Have SAP and advanced level of Excel skills will have added advantage. Have good communication skill to take care of different customers within/outside organization. Join us and be yourself! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens. This role is based in Pune. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting - and the shape of things to come. We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us shape tomorrow. Find out more about mobility at: https://new.siemens.com/global/en/products/mobility.html and about Siemens careers at: www.siemens.com/careers Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Experience Required: 1–3 Years Department: Artificial Intelligence Location: Mumbai - Hybrid About the role We are seeking a passionate and dynamic AI/ML Engineer with 1–3 years of experience , particularly in Generative AI , to join our growing Artificial Intelligence team. In this role, you will collaborate with cross-functional teams to develop innovative AI-driven solutions, leveraging the latest in LLMs, RAG pipelines, and cloud technologies. You will work extensively with frameworks like LangChain, RAGAS, and Hugging Face , and implement solutions using Microsoft Azure services . A deep understanding of transformers, LLMs (such as GPT, LLaMA), and cloud integration is highly desirable. Education Bachelor’s degree in Computer Science , Data Science , Artificial Intelligence , or a related field Certifications in AI/ML or Cloud Technologies (especially Microsoft Azure) are highly desirable Key Responsibilities GenAI Product Development: Build RAG-based AI assistants using LLMs (e.g., GPT, LLaMA) for use cases like chatbots, knowledge management, and personalized user experiences. Prompt Engineering: Design, test, and optimize prompts to control and fine-tune the output behavior of LLMs for different business needs. Generative AI Solutions: Develop AI models for text generation, summarization, and other GenAI applications using tools such as LangChain, Hugging Face, and RAGAS. RAG Implementation: Build and optimize Retrieval-Augmented Generation (RAG) pipelines integrating vector databases and LLMs for enhanced performance and accuracy. Cloud Integration Deploy AI/ML models using Azure services (e.g., Azure ML, Cognitive Services) and utilize cloud-native tools for scaling and monitoring. Collaboration & Knowledge Sharing Work with cross-functional teams to identify business problems and deliver AI-based solutions. Contribute to internal learning by sharing insights on latest research and tools. Required Skills Solid understanding of machine learning and deep learning concepts Hands-on experience with LLMs (GPT, LLaMA) and transformers (e.g., BERT) Strong grasp of LLM API cost, latency, and performance factors Experience with LangChain , Hugging Face , and RAGAS Expertise in designing RAG-based workflows Proficient in Python and libraries like TensorFlow , PyTorch , Pandas Familiarity with frameworks like Django , Flask , Streamlit Hands-on experience with Azure AI/ML services (Azure ML, Cognitive Services) Basic understanding of cloud computing , Docker , and CI/CD pipelines Experience using Microsoft services for chatbot development Strong analytical and problem-solving abilities Excellent communication skills to interact with both technical and non-technical stakeholders Detail-oriented with a growth mindset and willingness to learn Good to have Experience with MLOps tools and practices Understanding of ethical AI and governance frameworks Certifications in AI, ML, or cloud platforms (e.g., Azure AI Engineer) Familiarity with vector databases such as FAISS , Pinecone , etc. Show more Show less
Posted 3 weeks ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Greetings from TCS!!! TCS is looking for Contact Center Technical SME Job Title - Contact Center Technical SME Experience - 8+ years Location - PAN India Role Overview We are looking for a Contact Center Technical SME with deep expertise in contact center configuration and a strong understanding of AI-driven customer engagement technologies. This role will be instrumental in configuring and optimizing contact center platforms while integrating Virtual Agents, Conversational AI, and Generative AI solutions to enhance customer experience and operational efficiency. Key Responsibilities Serve as the technical expert for contact center platforms (e.g., Genesys, Avaya, Cisco, Amazon Connect, NICE CXone, five9). Configure and maintain IVRs, call flows, routing strategies, agent profiles, and AI-powered virtual agents. Collaborate with business and AI teams to design and implement Conversational AI solutions using platforms like Google Dialogflow, Amazon Lex, or Microsoft Bot Framework Participate in RFP and proactive deals to provide solutions across contact center Integrate Generative AI capabilities (e.g., OpenAI, Azure OpenAI, Google Vertex AI) for use cases such as agent assist, auto-summarization, sentiment analysis, and knowledge base generation. Support the deployment and tuning of AI models for real-time and post-call analytics. Ensure seamless integration between AI tools, CRM systems, and contact center platforms. Monitor and optimize AI performance, ensuring high accuracy and customer satisfaction. Maintain documentation for AI workflows, configurations, and system changes. Stay current with emerging AI trends and recommend innovative solutions. Qualifications: Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent experience). 8+ years of experience in contact center technology with a focus on configuration and system administration. Hands-on experience with contact center platforms and AI tools. Experience implementing Virtual Agents and Conversational AI solutions. Familiarity with Generative AI APIs and frameworks (e.g., OpenAI, Azure OpenAI, Google Cloud AI). Strong understanding of call routing, IVR design, ACD, CTI, and WFM systems. Proficiency in scripting or configuration languages used in contact center platforms. Excellent analytical, problem-solving, and communication skills. Show more Show less
Posted 3 weeks ago
4.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Work From Office only- Jaipur, Rajasthan Must have experience: 4 year+ Should be strongly skilled in FastAPI, RAG, LLM, Generative AI About the Role: We are seeking a hands-on and experienced Data Scientist with deep expertise in Generative AI to join our AI/ML team. You will be instrumental in building and deploying machine learning solutions, especially GenAI-powered applications. Key Responsibilities: - Design, develop, and deploy scalable ML and GenAI solutions using LLMs, RAG pipelines, and advanced NLP techniques. - Implement GenAI use cases involving embeddings, summarization, semantic search, and prompt engineering. - Fine-tune and serve LLMs using frameworks like vLLM, LoRA, and QLoRA; deploy on cloud and on-premise environments. - Build inference APIs using FastAPI and orchestrate them into robust services. - Utilize tools and frameworks such as LangChain, LlamaIndex, ONNX, Hugging Face, and VectorDBs (Qdrant, FAISS). - Collaborate closely with engineering and business teams to translate use cases into deployed solutions. - Guide junior team members, provide architectural insights, and ensure best practices in MLOps and model lifecycle. - Stay updated on latest research and developments in GenAI, LLMs, and NLP. Required Skills and Experience: - 4-8 years of hands-on experience in Data Science/Machine Learning, with a strong focus on NLP and Generative AI. - Proven experience with LLMs (LLaMA 1/2/3, Mistral, FLAN T5) and concepts like RAG, fine-tuning, embeddings, chunking, reranking, and prompt optimization. - Experience with LLM APIs (OpenAI, Hugging Face) and open-source model deployment. - Proficiency in LangChain, LlamaIndex, and FastAPI. - Understanding of cloud platforms (AWS/GCP) and certification in a cloud technology is preferred. - Familiarity with MLOps tools and practices for CI/CD, monitoring, and retraining of ML models. - Ability to read and interpret ML research papers and LLM architecture diagrams. Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements Description and Requirements Serve as Product Owner Salesforce Email (Insight Connect) and Agent Desktop (Salesforce Insight) that supports MetLife Global Customer Service and Operations. Agility to keep himself updated with relevant technologies and can implement the same to decrease costs, increase performance and positively affect the bottom line . Responsible for understanding customer requirements, designing and routing strategies, and integrating call flows. Deep understanding and experience with Email and Case Management, Metrics for Case Management Build, integrate, and enhance product roadmap by prioritizing epics, features and stories in the backlog that execute the strategic vision. Experience with both Salesforce Classic and Lightning preferred Deep understanding of Salesforce CoPilot capabilities Deep understanding and experience with Artificial Intelligence (AI) in terms of customer and Contact Center associate experience (e.g. knowledge of co-pilot, Next Best Action, automation driven based upon intent, knowledge driven based upon intent, call summarization, post call wrap/import speech analytics for post call wrap , tracking utilization of Co-Pilot) Experience building out and executing Artificial Intelligence (AI) strategies that translates to achieving KPIs (e.g. AHT, etc.) and other OKRs that are important to stakeholders. Draw insights and present results clearly to facilitate sound decision-making on next steps. Engage business owners and leaders in conversations about their business strategies and recommend Artificial Intelligence (AI) technology solutions to support those strategies. Monitor development of product stories during sprints and iterations, escalate issues and remove blockers. Establish plan for roll-out / release of new features and functionality and define, measure and report on product analytics, performance, and success. Partner with Application Development to lead discovery on new concepts, including determining and refining business value, and customer value based upon technical feasibility. Take part in cross functional system demos, review UAT, ensure post-production check out is completed and monitor/prioritize post-production issues. Build and manage relationships across several stakeholders, including Contact Center LOBs, Digital, Application Development, Risk/Security, etc. Ability to put structure around complex projects that maybe undefined or fluid in nature. Subject matter expert in Salesforce profiles, roles and permissions Subject matter expert in Salesforce reporting About MetLife Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife , through its subsidiaries and affiliates, is one of the world’s leading financial services companies; providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us! Show more Show less
Posted 3 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer Location: Gurgaon (On-site) Type: Full-Time Experience: 2–6 Years Role Overview We are seeking a hands-on AI Engineer to architect and deploy production-grade AI systems that power our real-time voice intelligence suite. You will lead AI model development, optimize low-latency inference pipelines, and integrate GenAI, ASR, and RAG systems into scalable platforms. This role combines deep technical expertise with team leadership and a strong product mindset. Key Responsibilities Build and deploy ASR models (e.g., Whisper, Wav2Vec2.0) and diarization systems for multi-lingual, real-time environments. Design and optimize GenAI pipelines using OpenAI, Gemini, LLaMA, and RAG frameworks (LangChain, LlamaIndex). Architect and implement vector database systems (FAISS, Pinecone, Weaviate) for knowledge retrieval and indexing. Fine-tune LLMs using SFT, LoRA, RLHF, and craft effective prompt strategies for summarization and recommendation tasks. Lead AI engineering team members and collaborate cross-functionally to ship robust, high-performance systems at scale. Preferred Qualification 2–6 years of experience in AI/ML, with demonstrated deployment of NLP, GenAI, or STT models in production. Proficiency in Python, PyTorch/TensorFlow, and real-time architectures (WebSockets, Kafka). Strong grasp of transformer models, MLOps, and low-latency pipeline optimization. Bachelor’s/Master’s in CS, AI/ML, or related field from a reputed institution (IITs, BITS, IIITs, or equivalent). What We Offer Compensation: Competitive salary + equity + performance bonuses Ownership: Lead impactful AI modules across voice, NLP, and GenAI Growth: Work with top-tier mentors, advanced compute resources, and real-world scaling challenges Culture: High-trust, high-speed, outcome-driven startup environment Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
India
Remote
About BeGig BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise. Your Opportunity Join our network as an AI Prompt Enginee r and play a pivotal role in shaping how startups leverage LLMs for real-world use cases. You’ll design, test, and optimize prompts for tasks like summarization, chatbots, content generation, code generation, document processing, and more—working with top models like GPT- 4, Claud e, Gemin i, and open-source LLM s .Work from anywhere, take on projects that match your strengths, and collaborate with innovative teams building the next wave of AI-powered products . Role Overview As an AI Prompt Engineer, you will: Craft Effective Prompt s: Design structured prompts for a range of tasks, ensuring accuracy, relevance, and safety of responses Test & Optimize Output s: Iteratively refine prompts based on performance metrics, user feedback, and edge case behavior Collaborate Cross-Functionall y: Work with developers, product managers, and designers to integrate LLMs into user-facing applications What You’ll Do Prompt Engineering & Evaluation Create, test, and tune prompts for tasks like Q&A, summarization, creative writing, coding assistance, and RAG system Evaluate outputs across edge cases to ensure quality, consistency, and alignment Design few-shot or zero-shot examples to improve model behavior across contexts Integration & Collaboration Collaborate with developers to integrate prompts into backend systems or APIs.Support the design of agent behaviors, tools, and memory strategies through effective prompt design Contribute to prompt libraries and documentation for reusability and scaling Technical Requirements & Skills Exper ience: 1–3+ years working with LLMs, NLP, or appli ed AI.LLM Expe rtise: Familiarity with GPT-4, Claude, Gemini, Mistral, or other leading m odels Prompting Tools: Experience with LangChain, OpenAI Playground, Anthropic Console, or similar tools Bonus: Understanding of few-shot/fine-tuned approaches, embeddings, and evaluation techniques Communic ation: Strong writing, analytical, and problem-solving skills to explain and iterate on prompt logic What We’re Looking for A language- and logic-savvy thinker who understands how LLMs interpret inputs A detail-oriented creator who enjoys refining outputs and iterating toward clarity A freelancer who’s excited to shape the future of human-AI interaction through better instructions Why Join Us? Immediate Impact: Work on projects where prompt quality directly drives user experience Remote & F lexible: Take on hourly or project-based work from anywhere Future Opport unities: Stay matched with prompt-heavy roles across chatbots, agents, and AI content Creative A utonomy: Help startups unlock the full potential of LLMs through precision prompting Ready to prompt the future Apply now to beco me a key AI Prompt Engineer for our client and a valued member of the BeGig network! Show more Show less
Posted 3 weeks ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description Some careers have more impact than others. If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be. HSBC is one of the largest banking and financial services organisations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Assistant Vice President – Global Risk Analytics Principal Responsibilities Apply innovative thinking and analytical techniques including data mining / modeling techniques as applicable to build regulatory models (Basel pillar 1 (internal ratings-based) IRB models) and (International Financial Reporting Standard )IFRS9 model for HSBC UK corporate portfolios. Coordination with internal (within Global Risk Analytics GRA) and external (front line business, regulatory oversight teams, etc.) stakeholders to understand the requirement of the project and delivery and stick to path of regulatory compliance. Explore new data sources / methods to improve model performance. Identify areas which require data remediation and proactively engage with internal / external stakeholder to address them. Clear documentation / summarization of the analytical output. Overall model risk management. Requirements 6+ years of experience in banking domain / regulatory risk analytics/ capital management. To be successful in this role you should ideally meet the following : Masters in any numeric discipline ,Stats/Maths , Engineering (or B-tech with relevance experience), Economics, MS / MBA in Finance Hands-on experience on at least one regulatory model development in python environment would be preferred. Prior experience in risk management, regulatory risk model development, model risk management would be preferred. Understanding of commercial banking and wholesale related products would be preferred. Highly focused on project delivery, attention to detail Excellent written and verbal communication skills Hands-on statistical knowledge with scorecard /econometric model development capability Strong collaborative, influencing and team building skills Strong analytical and problem solving skills, open minded, flexible, pragmatic Strong documentation and summarizing skills for senior audiences Strong programming skills in Python/SAS and experience of working with large volume of data Willing to work with colleagues in other areas/timezones where appropriate You’ll achieve more at HSBC HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and, opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc., We consider all applications based on merit and suitability to the role.” Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued By HSBC Electronic Data Processing (India) Private LTD*** Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role – AIML Data Scientist Location : Chennai Mode of Interview - In Person Date : 7th June 2025 (Saturday) Job Description Be a hands on problem solver with consultative approach, who can apply Machine Learning & Deep Learning algorithms to solve business challenges Use the knowledge of wide variety of AI/ML techniques and algorithms to find what combinations of these techniques can best solve the problem Improve Model accuracy to deliver greater business impact Estimate business impact due to deployment of model Work with the domain/customer teams to understand business context , data dictionaries and apply relevant Deep Learning solution for the given business challenge Working with tools and scripts for sufficiently pre-processing the data & feature engineering for model development – Python / R / SQL / Cloud data pipelines 4. Design , develop & deploy Deep learning models using Tensorflow / Pytorch Experience in using Deep learning models with text, speech, image and video data Design & Develop NLP models for Text Classification, Custom Entity Recognition, Relationship extraction, Text Summarization, Topic Modeling, Reasoning over Knowledge Graphs, Semantic Search using NLP tools like Spacy and opensource Tensorflow, Pytorch, etc Design and develop Image recognition & video analysis models using Deep learning algorithms and open source tools like OpenCV Knowledge of State of the art Deep learning algorithms Optimize and tune Deep Learnings model for best possible accuracy Use visualization tools/modules to be able to explore and analyze outcomes & for Model validation eg: using Power BI / Tableau Work with application teams, in deploying models on cloud as a service or on-prem Deployment of models in Test / Control framework for tracking Build CI/CD pipelines for ML model deployment Integrating AI&ML models with other applications using REST APIs and other connector technologies Constantly upskill and update with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AIML work and its impact. Technology/Subject Matter Expertise Sufficient expertise in machine learning, mathematical and statistical sciences Use of versioning & Collaborative tools like Git / Github Good understanding of landscape of AI solutions – cloud, GPU based compute, data security and privacy, API gateways, microservices based architecture, big data ingestion, storage and processing, CUDA Programming Develop prototype level ideas into a solution that can scale to industrial grade strength Ability to quantify & estimate the impact of ML models Softskills Profile Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control Ability to think ahead, and anticipate the needs for solving the problem will be important Ability to communicate key messages effectively, and articulate strong opinions in large forums Desirable Experience: Keen contributor to open source communities, and communities like Kaggle Ability to process Huge amount of Data using Pyspark/Hadoop Development & Application of Reinforcement Learning Knowledge of Optimization/Genetic Algorithms Operationalizing Deep learning model for a customer and understanding nuances of scaling such models in real scenarios Optimize and tune deep learning model for best possible accuracy Understanding of stream data processing, RPA, edge computing, AR/VR etc Appreciation of digital ethics, data privacy will be important Experience of working with AI & Cognitive services platforms like Azure ML, IBM Watson, AWS Sagemaker, Google Cloud will all be a big plus Experience in platforms like Data robot, Cognitive scale, H2O.AI etc will all be a big plus Show more Show less
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Primary Skillset: Generative AI Services, Machine Learning, Python, Cloud (GCP preferred), MLOps, Prompt Engineering About the Role: We are seeking a visionary and hands-on Lead Software Engineer to drive Generative AI development and exploration initiatives within our technology innovation team. This role is ideal for someone passionate about applying GenAI to real-world enterprise challenges and who enjoys building proof-of-concepts, production-ready solutions, and scalable AI platforms. Key Responsibilities: Lead the design and development of Generative AI applications, including LLM-based solutions, document intelligence, summarization, chatbot development, and intelligent automation Evaluate and integrate GenAI services and APIs (e.g., Vertex AI, OpenAI, Anthropic, HuggingFace) into existing or new platforms Build scalable backend services and data pipelines to support AI/ML workloads Explore and validate use cases across multiple business functions for GenAI application and automation opportunities Mentor junior engineers on GenAI design patterns, best practices, and engineering discipline Partner with data scientists, product teams, and business stakeholders to define requirements, build MVPs, and iterate toward enterprise-scale solutions Champion MLOps practices for GenAI model deployment, monitoring, and lifecycle management Contribute to technical research, architecture recommendations, and the long-term GenAI roadmap Qualifications: Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or related field 8+ years of experience in software engineering with 2–3 years working directly with AI/ML models or GenAI solutions Strong experience with Python, cloud platforms (especially GCP Vertex AI), and ML frameworks Hands-on experience with LLMs, prompt engineering, fine-tuning, and embeddings Experience in building and integrating GenAI services in real-time or batch applications Solid understanding of machine learning lifecycle, data preprocessing, and deployment pipelines Familiarity with CI/CD pipelines, API development, and cloud-native architectures Excellent communication, collaboration, and stakeholder management skills Show more Show less
Posted 3 weeks ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle,WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris,France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore,Karnataka
Accenture in India
4290 Jobs | Dublin 2