
3 Responsible AI Jobs

JobPe aggregates results for easy application access; you apply directly on the original job portal.

Experience: 5.0 - 10.0 years
Salary: 7 - 12 Lacs
Location: Chennai, Perungudi
Work from Office

Job Description
Join our team as a Generative AI Architect in the AI Practice! We are looking for a visionary individual to lead the design, development, and deployment of cutting-edge Generative AI solutions, driving business value across our products and services.

Key Responsibilities:
- Lead generative AI initiatives in text, ASR, TTS & STT generation, and develop AI pipelines and ML models.
- Design scalable AI systems using state-of-the-art generative models such as LLMs and diffusion models.
- Collaborate with cross-functional teams to integrate generative AI into business workflows.
- Define best practices for model lifecycle management, focusing on data preparation, training, evaluation, deployment, and monitoring.
- Guide ethical AI development with a focus on fairness, interpretability, and compliance.

Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 7+ years of AI/ML development experience, with 2+ years in generative AI.
- Hands-on experience with large language models and diffusion models.
- Strong proficiency in Python and ML frameworks.
- Experience with Azure, GCP, and containerization.
- Familiarity with AI architecture patterns and model fine-tuning.
- Knowledge of data privacy laws and AI ethics principles.

Preferred:
- Experience deploying GenAI applications in production.
- Knowledge of vector databases and retrieval-augmented generation.
- Experience with CI/CD pipelines for ML.

Skills: Generative AI, Large Language Models (LLMs), Diffusion Models, Machine Learning (ML), Artificial Intelligence (AI), Natural Language Processing (NLP), Python Programming, ML Frameworks (e.g., TensorFlow, PyTorch, Hugging Face), Azure, GCP, Ethical AI, Responsible AI, Vector Databases, RAG

Posted 2 weeks ago


Experience: 5 - 10 years
Salary: 20 - 30 Lacs
Location: Hyderabad, Chennai, Bengaluru
Hybrid

Job Description
Do you want to lead teams that find and exploit security vulnerabilities in Fortune 100 companies, critical infrastructure, and public sector agencies impacting millions of users? Join Securin's Offensive Security Team, where you'll emulate real-world attacks and oversee advanced offensive operations. We are a cross-disciplinary group of red teamers, adversarial AI researchers, and software developers dedicated to finding and fixing vulnerabilities across critical digital ecosystems.

Role & Responsibilities
- Lead and perform advanced offensive security assessments, including Red Team operations, threat-based evaluations, and vulnerability exploitation.
- Supervise and mentor a team of offensive engineers, manage task prioritization, and ensure high-quality delivery.
- Execute Red Team operations on production systems, including AI platforms, using real-world adversarial tactics.
- Provide strategic and technical security guidance to internal and external stakeholders.
- Collaborate cross-functionally to integrate findings into enterprise detection and defense strategies.
- Research and develop adversary TTPs across the full attack lifecycle.
- Build tools to automate and scale offensive emulation and vulnerability discovery, utilizing AI/ML systems.
- Continuously evaluate and enhance the assessment methodologies and frameworks used by the team.
- Contribute to the security community through publications, presentations, bug bounties, and open-source projects.

Required Qualifications
- 5+ years of experience in offensive security, red teaming, or penetration testing, with at least 1 year in a leadership role.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a relevant field; or equivalent experience.
- Expert knowledge of offensive security tactics, threat modeling, APT emulation, and Red Team operations.
- Strong understanding of the MITRE ATT&CK framework and exploitation of common vulnerabilities.
- Proficiency in one or more programming/scripting languages (Python, Go, PowerShell, C/C++, etc.).
- Hands-on experience with penetration testing tools such as Metasploit, Burp Suite Pro, NMAP, Nessus, etc.
- Familiarity with security in cloud environments (AWS, Azure, GCP) and across Windows/Linux/macOS platforms.
- Ability to clearly articulate findings to technical and executive audiences and lead mitigation efforts.
- Authorization to work in the country of employment at the time of hire and ongoing during employment.

Preferred Qualifications
- Certifications such as OSCP, OSCE, OSEP, CRTO, or equivalent.
- Experience with Purple Team operations and threat intelligence integration.
- Track record in CTF competitions or bug bounty programs.
- Reverse engineering or malware analysis expertise.
- Exposure to Responsible AI and adversarial machine learning.
- Participation in AI Village at DEF CON or similar security research events.
- Publications or contributions to conferences such as AISec, NeurIPS, FAccT, or IC4.

Other Requirements
Ability to meet Securin, customer, and/or government security screening requirements. This includes a background check at the time of hire/transfer and every two years thereafter.

Who Should Apply
You have experience executing technical research and offensive security strategies with teams. You are skilled in experimental security science and confident in building your own tools. You clearly communicate findings, are mission-driven, and want to drive change in AI and cybersecurity.

Role-Specific Policy
This hybrid role requires in-office presence at least 50% of the time. Locations: Chennai, Tamil Nadu (India)

Posted 1 month ago


Experience: 5 - 10 years
Salary: 0 - 3 Lacs
Location: Chennai, Bengaluru, Hyderabad
Work from Office

Job Description
Seeking a highly skilled and experienced AI & Data Governance and Model Risk Management (MRM) Specialist to join our team. The ideal candidate will oversee the governance of large language models (LLMs) and other AI models, ensure compliance with data privacy laws and other AI regulations, and manage the risks associated with deploying AI technologies. This role requires a deep understanding of AI ethics and regulatory requirements, the ability to work cross-functionally with various teams, and an understanding of multi-cloud or hybrid-cloud architectures for deploying AI at scale.

Key Responsibilities:
- Develop and implement governance frameworks for LLMs and AI models, ensuring they align with ethical standards and business objectives.
- Collaborate with data science, legal, and compliance teams to establish best practices for AI model development, deployment, and monitoring.
- Conduct risk assessments of AI models to identify potential biases, ethical concerns, and compliance issues with data privacy regulations and other AI regulations.
- Design and maintain Model Risk Management (MRM) policies and procedures, including model validation, performance tracking, and documentation standards.
- Provide guidance on the interpretation of data privacy laws and regulations, such as GDPR, CCPA, and other relevant frameworks, as they pertain to AI models.
- Apply AI risk management frameworks (such as NIST's) from strong working experience.
- Facilitate training and awareness programs for staff on AI & data governance, ethical AI use, and data privacy principles.
- Participate in industry forums and working groups to stay abreast of emerging trends, risks, and opportunities in AI governance and MRM.
- Work closely with technology teams to implement controls and monitoring systems for AI models.
- Prepare reports and presentations for senior management and stakeholders on the status of AI governance and model risk management activities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Finance Management, or a related field.
- Minimum of 5 years of experience in AI & data governance, model risk management, or a related area.
- Strong understanding of AI technologies, machine learning models, and their applications in a business context.
- Experience deploying AI applications in multi-cloud and hybrid-cloud environments.
- Knowledge of data privacy laws and regulations, and experience implementing data governance policies.
- Excellent analytical, problem-solving, and decision-making skills.
- Ability to communicate complex concepts to a non-technical audience.
- Proven track record of working in a cross-functional team environment.
- Professional certifications in AI ethics, data privacy, or risk management are a plus.

Posted 3 months ago

