Posted: 5 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

About XenonStack

XenonStack is the fastest-growing Data and AI Foundry for Agentic Systems, enabling enterprises to gain real-time and intelligent business insights.

We Deliver Innovation Through

  • Agentic Systems for AI Agents → akira.ai
  • Vision AI Platform → xenonstack.ai
  • Inference AI Infrastructure for Agentic Systems → nexastack.ai

Our mission is to accelerate the world’s transition to AI + Human Intelligence by building systems that are not only powerful but also safe, transparent, and accountable.

THE OPPORTUNITY

We are seeking a Responsible AI Engineer to design and implement frameworks that ensure fairness, safety, compliance, and transparency in enterprise AI systems. This is a high-impact role at the intersection of AI engineering, ethics, and governance. You will work on ensuring that XenonStack’s Agentic AI platforms meet enterprise and regulatory standards, while remaining reliable and trustworthy in production.

Key Responsibilities

  • Responsible AI Frameworks
    • Develop and implement fairness, bias detection, and transparency frameworks for AI systems.
    • Define policies and guardrails for safe LLM interactions, multi-agent orchestration, and decision-making systems.
  • Evaluation & Monitoring
    • Design pipelines to monitor AI reliability, safety, and hallucination rates (a minimal illustrative sketch follows this list).
    • Build metrics to track bias, fairness, interpretability, and ethical compliance.
  • Governance & Compliance
    • Ensure compliance with GDPR, SOC2, HIPAA, ISO 27001, and other AI governance standards.
    • Collaborate with legal, compliance, and security teams to embed governance by design.
  • Transparency & Explainability
    • Implement XAI (Explainable AI) techniques to make models and agent decisions interpretable.
    • Provide tools and dashboards for auditing and reporting AI behavior.
  • Collaboration & Cross-Functional Work
    • Work with AI Engineers, AgentOps, and Data Architects to integrate responsible AI practices across the stack.
    • Partner with customers and regulators to communicate system reliability and ethical safeguards.
  • Continuous Improvement
    • Stay ahead of Responsible AI research, EU AI Act, NIST AI RMF, and global standards.
    • Feed evaluation results into continuous improvement loops for AI reliability and safety.
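
For illustration only, the snippet below sketches the kind of batch check an evaluation pipeline might run to estimate a hallucination rate. It is not XenonStack's pipeline; the Interaction record and the token-overlap is_grounded heuristic are hypothetical stand-ins for a real evaluator (for example an LLM-as-judge or tools such as Ragas or DeepEval).

```python
# Minimal illustrative sketch; not XenonStack's monitoring stack.
# is_grounded is a naive token-overlap heuristic standing in for a real evaluator.
from dataclasses import dataclass


@dataclass
class Interaction:
    question: str
    context: str   # retrieved evidence the answer should be grounded in
    answer: str    # model output to be checked


def is_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Hypothetical grounding check: fraction of answer tokens found in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return False
    overlap = len(answer_tokens & context_tokens) / len(answer_tokens)
    return overlap >= threshold


def hallucination_rate(batch: list[Interaction]) -> float:
    """Share of responses in the batch that fail the grounding check."""
    failures = sum(not is_grounded(item.answer, item.context) for item in batch)
    return failures / len(batch)


batch = [
    Interaction("Where is XenonStack based?",
                "XenonStack is based in Punjab, India.",
                "XenonStack is based in Punjab, India."),
    Interaction("Where is XenonStack based?",
                "XenonStack is based in Punjab, India.",
                "It was founded on the moon in 1850."),
]
print(f"Hallucination rate: {hallucination_rate(batch):.0%}")  # 50% on this toy batch
```

In practice the heuristic would be replaced by a stronger evaluator, and the resulting rate tracked over time in an observability dashboard.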

Skills & Qualifications

Must-Have

  • 3–6 years in AI/ML engineering, AI governance, or AI safety research.
  • Strong knowledge of AI fairness, interpretability, and Responsible AI frameworks.
  • Hands-on with evaluation tools (OpenAI Evals, Ragas, DeepEval, Fairlearn, Aequitas); a short Fairlearn sketch follows this list.
  • Proficiency in Python and data analysis frameworks.
  • Understanding of LLM architectures, RAG pipelines, and multi-agent workflows.
  • Familiarity with compliance and governance standards (GDPR, SOC2, ISO, HIPAA).
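
As a hedged example of the hands-on fairness evaluation this role calls for, the sketch below uses Fairlearn's MetricFrame and demographic_parity_difference on a toy dataset. The data and the "gender" sensitive feature are invented for illustration and are not tied to any XenonStack system.

```python
# Toy fairness check with Fairlearn; illustrative data only.
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]  # hypothetical sensitive feature

# Per-group accuracy and recall, broken down by the sensitive feature.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.by_group)

# Aggregate disparity: difference in selection rate between groups.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
print(f"Demographic parity difference: {dpd:.3f}")
```

Checks like these would typically run as part of an evaluation pipeline, with the resulting metrics fed into the bias and fairness tracking described above.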

Good-to-Have

  • Exposure to reinforcement learning (RLHF, RLAIF, reward modeling).
  • Experience with Responsible AI monitoring in regulated industries (BFSI, Healthcare, GRC, SOC).
  • Knowledge of observability platforms (Weights & Biases, Arize AI, Prometheus, OpenTelemetry).
  • Contributions to Responsible AI open-source projects or research.

WHY SHOULD YOU JOIN US?

  • Agentic AI Product Company – Build trustworthy AI systems that enterprises can safely adopt.
  • A Fast-Growing Category Leader – Work at one of the fastest-growing AI Foundries, delivering Responsible AI at scale.
  • Career Mobility & Growth – Progress into roles like AI Governance Lead, Responsible AI Architect, or Chief Trust Officer (CTO-track).
  • Global Exposure – Collaborate with Fortune 500 enterprises and regulators on Responsible AI adoption.
  • Create Real Impact – Ensure AI systems are not just innovative, but ethical, explainable, and safe.
  • Culture of Excellence – Our values of Agency, Taste, Ownership, Mastery, Impatience, and Customer Obsession empower you to lead with purpose.
  • Responsible AI First – Join a company where Responsible AI is a foundational principle, not an afterthought.

XENONSTACK CULTURE – JOIN US & MAKE AN IMPACT!

At XenonStack, we believe in shaping the future of intelligent systems. We foster a culture of cultivation built on bold, human-centric leadership principles, where deep work, simplicity, and adoption define everything we do.

Our Cultural Values

  • Agency – Be self-directed and proactive.
  • Taste – Sweat the details and build with precision.
  • Ownership – Take responsibility for outcomes.
  • Mastery – Commit to continuous learning and growth.
  • Impatience – Move fast and embrace progress.
  • Customer Obsession – Always put the customer first.

Our Product Philosophy

  • Obsessed with Adoption – Making Responsible AI frameworks accessible and enterprise-ready.
  • Obsessed with Simplicity – Turning complex ethical challenges into seamless, auditable workflows.

Be part of our mission to accelerate the world’s transition to AI + Human Intelligence by making Responsible AI the backbone of enterprise adoption.

Location: Sahibzada Ajit Singh Nagar, Punjab, India