5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As an engineer in this role, you will build and optimize high-throughput, low-latency LLM inference infrastructure, serving open-source models such as Qwen, LLaMA, and Mixtral on multi-GPU systems (A100/H100). Your main areas of focus will include performance tuning, model hosting, routing logic, speculative decoding, and cost-efficiency tooling. To excel in this position, you must have deep experience with vLLM, tensor/pipeline parallelism, and KV cache management. A strong understanding of CUDA-level inference bottlenecks, FlashAttention-2, and quantization is essential, as is familiarity with FP8, INT4, and speculative decoding (e.g., TwinPilo...
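To illustrate the kind of capacity planning this role involves: KV cache management matters because the cache, not the weights, often dominates GPU memory at high batch sizes. The sketch below is an illustrative back-of-the-envelope calculator (not a vLLM API); the model config values are assumptions picked to resemble a LLaMA-2-70B-style architecture with grouped-query attention.

```python
def kv_cache_bytes_per_token(num_layers: int,
                             num_kv_heads: int,
                             head_dim: int,
                             bytes_per_elem: int) -> int:
    """Bytes of KV cache one token occupies across all layers.

    Factor of 2 accounts for storing both the K and the V tensor.
    """
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem


# Assumed LLaMA-2-70B-style config: 80 layers, 8 KV heads (GQA),
# head_dim 128, fp16 cache (2 bytes per element).
per_token = kv_cache_bytes_per_token(80, 8, 128, 2)
print(per_token)  # 327680 bytes, i.e. 320 KiB per token

# At a 4096-token context and batch size 32, the cache alone needs:
total_gib = per_token * 4096 * 32 / 2**30
print(round(total_gib, 1))  # ~40 GiB -- half an 80 GB H100
```

Halving `bytes_per_elem` (e.g., an FP8 or INT4-quantized cache) directly doubles the batch size that fits in the same memory budget, which is one reason quantization shows up alongside cost-efficiency tooling in this role.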
Posted 2 weeks ago