Location: Hyderabad
About the Role
In this role, you will lead governance frameworks and drive adoption through communication, enablement, and active marketing campaigns, including community building, hackathons, showcases, and enterprise social engagement, to ensure safe, compliant, and high-impact use of AI across the organization.
Major accountabilities
- Governance & Policy: Define, socialize, and maintain AI governance policies, standards, and guardrails aligned with regulatory and internal controls (e.g., model risk, data privacy, security, responsible AI principles). Establish clear guidance for LLMs, RAG, prompt usage, data handling, and human-in-the-loop review.
- Risk Management: Operate and optimize AI risk assessment processes, model registers, and approval workflows; partner with Legal, Privacy, and Security to ensure compliance with applicable regulations (e.g., GDPR, HIPAA, GxP, 21 CFR Part 11, emerging AI acts) and enterprise policies.
- Compliance Monitoring: Implement mechanisms for ongoing compliance monitoring, incident reporting, and periodic reviews; define KPIs and metrics for governance effectiveness and platform health.
- Enablement & Communication: Develop and execute an adoption strategy, including playbooks, guidelines, FAQs, office hours, and training for technical and non-technical audiences; convert complex governance concepts into accessible, action-oriented content.
- Community Building & Marketing: Expand the platform footprint through internal campaigns, showcases, newsletters, Yammer/Teams community engagement, and internal roadshows; coordinate hackathons, challenges, and demo days to surface compliant, high-value use cases.
- Stakeholder Engagement: Serve as a trusted liaison to business units and platform teams; align adoption initiatives with enterprise roadmaps; gather feedback and translate it into platform and governance improvements.
- Platform Intake & Catalog: Establish and curate a catalog of approved AI capabilities, patterns, and reusable assets; streamline intake, evaluation, and onboarding pathways to accelerate safe usage.
- Responsible AI & Ethics: Embed bias assessment, transparency, explainability, and human oversight into solution design and deployment processes; drive awareness of responsible AI practices across teams.
- Change Management: Apply structured change approaches to support the rollout of policies, tooling, and processes; measure adoption and iterate programs based on data-driven insights.
- Vendor & Partner Alignment: Collaborate with suppliers and platform vendors to ensure governance, telemetry, and communications align with enterprise standards and industry best practices.
Essential requirements
- Education: Bachelor's or master's degree in computer science, data science, information systems, communications, or a related field.
- Experience: 8+ years in AI/ML, data, or digital programs, with at least 4 years focused on AI governance, risk, compliance, or platform adoption in pharma, biotech, or life sciences.
- Technical and Governance Expertise:
- Understanding of generative AI, LLMs, RAG, prompt management, content safety, and evaluation metrics.
- Familiarity with AI/ML platforms, MLOps practices, and cloud AI services (e.g., Azure OpenAI, AWS Bedrock, Google Vertex AI), and their governance implications.
- Knowledge of data privacy, security, model risk management, and regulated delivery in life sciences (GDPR, HIPAA, GxP, 21 CFR Part 11).
- MLOps/DevOps Awareness: Familiarity with CI/CD, model registries, monitoring, and audit logging concepts as they relate to governance and compliance.
- Program Management: Demonstrated ability to plan and execute multi-workstream communication and adoption initiatives; strong analytical and problem-solving skills.
- Familiarity with regulatory requirements and industry standards in the pharmaceutical and life sciences sectors.
Benefits and Rewards
Read our handbook to learn about all the ways we'll help you thrive personally and professionally.