AI Research Engineer – Speech/ASR


Posted: 3 weeks ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

A voice-first healthcare AI platform, 12+ years in healthcare technology, built on:

  • A multilingual Speech-to-Text (STT) engine
  • Healthcare-fine-tuned LLMs
  • A large, curated clinical dataset collected over 14 years
  • AI tools for clinical documentation, voice search, triage, patient interaction & more


Role: Research Engineer – Speech/ASR + LLM


1–2 years of hands-on research experience in:

  • Speech recognition (ASR)
  • Whisper/Wav2Vec/XLS-R
  • Audio processing
  • LLM model training / fine-tuning
  • NLP model development

OR experience from:

  • Research labs (IIT, IISc, BITS, etc.)
  • AI startups
  • ML research internships
  • Academic thesis projects that resulted in real ASR/LLM outputs


Responsibilities



🗣️ Speech / ASR

  • Train & fine-tune multilingual ASR models (Whisper, NeMo Conformer, Wav2Vec2, XLS-R)
  • Build & evaluate STT models for Indian languages
  • Improve accuracy for code-mixing (Hinglish/Tanglish/etc.)
  • Handle real-world noisy OPD/clinic audio
  • Implement chunked streaming inference
  • Work with VAD, simple diarization, and segmentation
  • Build WER/CER evaluation pipelines
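
The WER/CER evaluation pipelines mentioned above boil down to token-level edit distance over reference vs. hypothesis transcripts. A minimal, dependency-free sketch (a real pipeline would also normalize punctuation/casing and typically use a library such as jiwer):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(dp[j] + 1,        # deletion
                      dp[j - 1] + 1,    # insertion
                      prev + (r != h))  # substitution (free if tokens match)
            prev, dp[j] = dp[j], cur
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: same distance, computed over characters."""
    return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)
```

For code-mixed Hinglish/Tanglish audio, CER is often the more stable metric, since transliteration choices inflate WER.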

🧠 LLM / NLP

  • Fine-tune LLMs (Llama, Mistral, Gemma) on clinical datasets
  • Build models for:
    • Clinical summarization
    • Entity extraction (symptoms, diagnosis, plan)
    • Multilingual documentation
    • Voice command agents
  • Build training datasets from OPD transcripts
  • Optimize inference (vLLM/TGI)
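
"Build training datasets from OPD transcripts" commonly means pairing each transcript with a clinician-written target and emitting chat-format JSONL for supervised fine-tuning. A hedged sketch, where the "messages"/"role"/"content" schema and the SOAP-note prompt are illustrative assumptions, not a prescribed format:

```python
import json

def transcript_to_record(transcript: str, summary: str) -> str:
    """Convert one OPD transcript plus its target summary into a single
    chat-format JSON line; adjust the schema to whatever the trainer expects."""
    record = {
        "messages": [
            {"role": "system",
             "content": "Summarize the clinical conversation as a SOAP note."},
            {"role": "user", "content": transcript},
            {"role": "assistant", "content": summary},
        ]
    }
    return json.dumps(record, ensure_ascii=False)

# One (transcript, summary) pair becomes one JSONL line.
line = transcript_to_record(
    "Doctor: What brings you in? Patient: Fever for two days.",
    "S: Fever x2 days. A: Likely viral fever. P: Paracetamol, review in 48h.",
)
```

Writing one such line per visit yields a JSONL file that most fine-tuning stacks (HuggingFace TRL, Axolotl, etc.) can consume after mapping to their chat template.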

🏗️ Research + Engineering

  • Read & implement research papers
  • Prototype new architectures
  • Run experiments at scale
  • Collaborate with Speech + LLM teams
  • Prepare reports, benchmarks & internal publications


Required Skills

Must-Have

  • 1–2 years hands-on experience in ASR or LLMs
  • Strong Python & PyTorch
  • Experience fine-tuning Whisper / Wav2Vec / XLS-R OR LLMs
  • Experience with GPU training
  • Good understanding of speech features (MFCC, spectrograms)
  • Experience using HuggingFace or NeMo
  • Strong debugging skills & experiment tracking
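
On speech features: MFCCs and mel filterbanks are built on top of a framed, windowed power spectrogram. A small NumPy sketch of that front end, assuming the typical ASR settings of 25 ms frames with a 10 ms hop at 16 kHz:

```python
import numpy as np

def power_spectrogram(signal, frame_len=400, hop=160):
    """Frame the waveform, apply a Hann window, and take the magnitude-
    squared FFT of each frame -- the raw spectrogram that mel filterbanks
    and MFCCs are computed from. frame_len=400 / hop=160 samples are
    25 ms / 10 ms at a 16 kHz sample rate."""
    n_frames = 1 + max(0, len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Shape: (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# 1 second of a 440 Hz tone at 16 kHz: energy concentrates in one FFT bin.
t = np.arange(16000) / 16000.0
spec = power_spectrogram(np.sin(2 * np.pi * 440 * t))
```

With a 40 Hz bin width (16000 / 400), the 440 Hz tone peaks at bin 11, which is a quick sanity check when debugging a feature pipeline.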

Nice-to-Have

  • Experience with Indian languages
  • VAD, diarization (basic)
  • Triton/vLLM
  • Distributed training basics
  • DSP fundamentals
  • Research publications or internship at ML labs


How to Apply

Send your resume + GitHub + projects to:

bharat@neuralbits.com


Subject line: “Research Engineer – Speech/LLM – Your Name”

Include any:

  • ASR demos
  • Whisper/Wav2Vec fine-tuned models
  • LLM fine-tuning projects
  • Research papers / thesis / experiments


Job Location: Mumbai

Salary: Very competitive

Neuralbits Technologies

Technology, Artificial Intelligence

Silicon Valley
