Posted: 5 days ago
Remote
Contractual
They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. They guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
This role is part of a project supporting leading LLM companies. The primary objective is to help these foundational LLM companies improve their Large Language Models. We support them by providing high-quality proprietary data, which can be used as a basis for fine-tuning models or as an evaluation set to benchmark their performance. In an SFT data-generation workflow, you might assemble a prompt that contains code and questions, elaborate model responses, and translate the provided CUDA/C++ code into equivalent Python code using PyTorch and NumPy to replicate the algorithm’s behavior. For RLHF data generation, you may create a prompt or use one provided by the customer, ask the model questions, and evaluate the outputs generated by different versions of the LLM, comparing them and providing feedback that is then used in fine-tuning.
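By way of illustration only, here is a minimal sketch of the kind of CUDA-to-Python translation such a task might involve. The saxpy kernel and the function names below are hypothetical examples, not customer material:

    import numpy as np
    import torch

    # Hypothetical CUDA kernel to be translated (illustrative only):
    #
    #   __global__ void saxpy(int n, float a, const float *x, float *y) {
    #       int i = blockIdx.x * blockDim.x + threadIdx.x;
    #       if (i < n) y[i] = a * x[i] + y[i];
    #   }

    def saxpy_numpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Element-wise y = a * x + y, mirroring the kernel's per-thread logic with NumPy."""
        return a * x + y

    def saxpy_torch(a: float, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Same operation in PyTorch; runs on the GPU if the tensors are on a CUDA device."""
        return a * x + y

    x = np.arange(4, dtype=np.float32)
    y = np.ones(4, dtype=np.float32)
    print(saxpy_numpy(2.0, x, y))                                       # [1. 3. 5. 7.]
    print(saxpy_torch(2.0, torch.from_numpy(x), torch.from_numpy(y)))   # tensor([1., 3., 5., 7.])

Real kernels are far less trivial, but the deliverable has the same shape: a Python implementation whose behavior matches the original.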
Key Responsibilities:
● Translate CUDA/C++ code into equivalent Python implementations using PyTorch and NumPy, ensuring logical and performance parity.
● Analyze CUDA kernels and GPU-accelerated code for structure, efficiency, and function before translation.
● Evaluate LLM-generated translations of CUDA/C++ code to Python, providing technical feedback and corrections.
● Collaborate with prompt engineers and researchers to develop test prompts that reflect real-world CUDA/PyTorch tasks.
● Participate in RLHF workflows, ranking LLM responses and justifying ranking decisions clearly.
● Debug and review translated Python code for correctness, readability, and consistency with industry standards (see the parity-check sketch after this list).
● Maintain technical documentation to support reproducibility and code clarity.
● Propose enhancements to prompt structure or conversion approaches based on common LLM failure patterns.
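As a rough sketch of the correctness and numerical-consistency checking mentioned above (again, the kernel and function names are hypothetical), a translated implementation might be validated against a literal per-element port within a floating-point tolerance:

    import numpy as np

    def saxpy_loop(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Literal per-element port of a hypothetical CUDA saxpy kernel (one 'thread' per index)."""
        out = np.empty_like(y)
        for i in range(len(x)):
            out[i] = a * x[i] + y[i]
        return out

    def saxpy_vectorised(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Idiomatic vectorised NumPy equivalent of the same kernel."""
        return a * x + y

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1 << 16).astype(np.float32)
    y = rng.standard_normal(1 << 16).astype(np.float32)

    # Numerical consistency check: the two ports must agree within float32 tolerance.
    assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorised(2.0, x, y),
                       rtol=1e-5, atol=1e-6)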
Requirements:
● 5+ years of overall work experience, with at least 3 years of relevant experience in Python and 2+ years in CUDA/C++.
● Strong hands-on experience with Python, especially in scientific computing using PyTorch and NumPy.
● Solid understanding of CUDA programming concepts and C++ fundamentals.
● Demonstrated ability to analyze CUDA kernels and accurately reproduce them in Python.
● Familiarity with GPU computation, parallelism, and performance-aware coding practices.
● Strong debugging skills and attention to numerical consistency when porting logic across languages.
● Experience evaluating AI-generated code or participating in LLM tuning is a plus.
● Ability to communicate technical feedback clearly and constructively.
● Fluent conversational and written English communication skills.
People Prime Worldwide