Job Description
As a Principal Research Scientist focusing on AI Alignment at Ola Krutrim in Bangalore, India, you will lead efforts in Trust and Safety, Interpretability, and Red Teaming within the AI division. Your role will be crucial in ensuring that the AI systems we develop are safe, ethical, interpretable, and reliable, with a significant impact on millions of lives. You will be at the forefront of cutting-edge AI research, guiding the implementation of technologies that adhere to the highest standards of safety and transparency.

Responsibilities:
- Provide strategic leadership for the AI Alignment division, overseeing teams dedicated to Trust and Safety, Interpretability, and Red Teaming.
- Work closely with the Lead AI Trust and Safety Research Scientist and the Lead AI Interpretability Research Scientist to align goals and methodologies.
- Develop comprehensive strategies for AI alignment, integrate advanced safety and interpretability techniques, and establish best practices for red teaming exercises to identify vulnerabilities.
- Collaborate with product and research teams to embed safety and interpretability throughout the AI development lifecycle.
- Stay current with research in AI ethics, safety, and interpretability, and represent the company at industry events.
- Manage resource allocation and strategic planning for the AI Alignment division.
- Mentor and develop team members, foster innovation, and communicate progress and recommendations to executive leadership.

Qualifications:
- Ph.D. in Computer Science, Machine Learning, or a related field, with a focus on AI safety, ethics, and interpretability.
- At least 7 years of experience in AI research and development, including 3 years in a leadership role.
- Expertise in AI safety, interpretability, and red teaming methodologies.
- Strong knowledge of advanced techniques such as Reinforcement Learning, Proximal Policy Optimization, and attention-based methods, along with experience overseeing red teaming exercises for AI systems.
- A visionary mindset, excellent communication skills, strong project management abilities, and a proven track record in AI safety, ethics, and interpretability research.

In this role, you will shape the future of responsible AI development at Ola Krutrim. By leading cross-functional initiatives and fostering a culture of continuous learning and innovation, you will help build public trust in AI technologies and position the company as a leader in ethical and responsible AI development.