

Experience: 5.0–9.0 years

Salary: 0 Lacs

Location: Chennai, Tamil Nadu

Work mode: On-site

As an engineer in this role, you will build and optimize high-throughput, low-latency LLM inference infrastructure, hosting open-source models such as Qwen, LLaMA, and Mixtral on multi-GPU systems (A100/H100). Your main areas of focus will be performance tuning, model hosting, routing logic, speculative decoding, and cost-efficiency tooling. To excel in this position, you must have deep experience with vLLM, tensor/pipeline parallelism, and KV cache management. A strong understanding of CUDA-level inference bottlenecks, FlashAttention-2, and quantization is essential. Additionally, familiarity with FP8, INT4, and speculative decoding (e.g., TwinPilo...
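To make the stack above concrete, here is a minimal sketch of serving an open-weight model with vLLM using tensor parallelism and an FP8 KV cache. The model name, GPU count, and flag values are illustrative assumptions, not details from this posting:

```shell
# Hypothetical launch on a 2-GPU node (e.g., 2x A100). Flags assume a
# recent vLLM release with the OpenAI-compatible `vllm serve` entry point.
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --tensor-parallel-size 2 \        # shard weights across both GPUs
  --kv-cache-dtype fp8 \            # quantize the KV cache to FP8
  --gpu-memory-utilization 0.90 \   # fraction of VRAM reserved for the engine
  --max-model-len 8192              # cap context length to bound KV cache size
```

In practice, tuning choices like `--max-model-len` and the KV cache dtype trade context capacity against batch throughput, which is the kind of cost-efficiency work the role describes.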

Posted 2 weeks ago
