5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an engineer in this role, you will build and optimize high-throughput, low-latency LLM inference infrastructure, serving open-source models such as Qwen, LLaMA, and Mixtral on multi-GPU systems (A100/H100). Your main areas of focus will include performance tuning, model hosting, routing logic, speculative decoding, and cost-efficiency tooling. To excel in this position, you must have deep experience with vLLM, tensor/pipeline parallelism, and KV cache management, along with a strong understanding of CUDA-level inference bottlenecks, FlashAttention-2, and quantization. Additionally, familiarity with FP8, INT4, and speculative decoding (e.g., TwinPilo...
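To illustrate one of the techniques named above, here is a minimal, self-contained sketch of greedy speculative decoding. Both "models" are deterministic toy functions invented for this example (real systems compare model logits and use an acceptance-sampling rule): a cheap draft model proposes K tokens, the expensive target model verifies them, and the longest agreeing prefix is kept plus one corrected token.

```python
# Toy sketch of greedy speculative decoding. The draft/target functions
# below are hypothetical stand-ins (next token = f(previous token)), not
# real LLMs; the point is the propose-then-verify control flow.

def draft_next(token: int) -> int:
    # Cheap draft model: fast, usually agrees with the target.
    return (token * 3 + 1) % 7

def target_next(token: int) -> int:
    # Expensive target model: disagrees with the draft when token == 5.
    if token == 5:
        return 0
    return (token * 3 + 1) % 7

def speculative_step(seq: list[int], k: int = 4) -> list[int]:
    # 1) Draft model autoregressively proposes k tokens.
    proposal = []
    last = seq[-1]
    for _ in range(k):
        last = draft_next(last)
        proposal.append(last)

    # 2) Target model checks every proposed position; in a real system
    #    this is a single batched forward pass, which is the speedup.
    accepted = []
    prev = seq[-1]
    for tok in proposal:
        correct = target_next(prev)
        if tok == correct:
            accepted.append(tok)
            prev = tok
        else:
            # First mismatch: keep the target's token instead and stop,
            # so output is identical to running the target alone.
            accepted.append(correct)
            break
    return seq + accepted

print(speculative_step([2], k=4))  # all 4 draft tokens accepted
print(speculative_step([6], k=4))  # mismatch after the first token
```

The key property preserved here is that the generated sequence matches what the target model would produce on its own; the draft model only changes how many target evaluations happen per accepted token.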
Posted 2 months ago