
1 Attention-Based Methods Job


Experience: 7.0 - 11.0 years

Salary: 0 Lacs

Location: Karnataka

Work mode: On-site

As a Principal Research Scientist focusing on AI Alignment at Ola Krutrim in Bangalore, India, you will lead the Trust and Safety, Interpretability, and Red Teaming efforts within the AI division. Your role is crucial in ensuring that the AI systems developed are safe, ethical, interpretable, and reliable, with a significant impact on millions of lives. You will be at the forefront of cutting-edge AI research, guiding the implementation of technologies that adhere to the highest standards of safety and transparency.

Your responsibilities include providing strategic leadership for the AI Alignment division and overseeing teams dedicated to Trust and Safety, Interpretability, and Red Teaming. You will work closely with the Lead AI Trust and Safety Research Scientist and the Lead AI Interpretability Research Scientist to align goals and methodologies. Developing comprehensive strategies for AI alignment, integrating advanced safety and interpretability techniques, and establishing best practices for red teaming exercises to identify vulnerabilities will be key aspects of the role. You will also collaborate with product and research teams to embed safety and interpretability throughout the AI development lifecycle, stay current with AI ethics, safety, and interpretability research, represent the company at industry events, and manage resource allocation and strategic planning for the division. Mentoring and developing team members, fostering innovation, and communicating progress and recommendations to executive leadership are also essential in this role.

To qualify for this position, you should hold a Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety, ethics, and interpretability, and have at least 7 years of experience in AI research and development, including 3 years in a leadership role. Expertise in AI safety, interpretability, and red teaming methodologies is required, along with strong knowledge of advanced techniques such as Reinforcement Learning, Proximal Policy Optimization, and attention-based methods, and experience overseeing red teaming exercises for AI systems. A visionary mindset, excellent communication skills, strong project management abilities, and a proven track record in AI safety, ethics, and interpretability research will be instrumental in shaping the future of responsible AI development at Ola Krutrim. By leading cross-functional initiatives and fostering a culture of continuous learning and innovation, you will help build public trust in AI technologies and position the company as a leader in ethical and responsible AI development.
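The posting names attention-based methods among the required techniques but does not define them. For reference, here is a minimal sketch of scaled dot-product attention, the core operation behind such methods; the function name, toy shapes, and random inputs are illustrative only and not part of the posting:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the scaled
    dot-product attention of Vaswani et al. (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy usage: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```

The scaling by sqrt(d_k) keeps the pre-softmax scores from growing with embedding size, which would otherwise saturate the softmax and vanish its gradients.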

Posted 1 day ago

