About the role

We're building a multi-tenant AI-governance platform on Microsoft Azure. You'll design and implement secure, scalable services; integrate Azure ML Responsible AI tooling and Microsoft Purview; and enforce policy guardrails with Azure Policy.

Responsibilities

- Design and build multi-tenant back-end services on Azure (App Service/AKS/Functions) behind Azure API Management; implement tenant isolation, rate limits, and API versioning.
- Integrate Azure ML (model registry/endpoints) and the Responsible AI Dashboard/Toolbox to generate fairness, error, and explainability reports.
- Implement Azure AI Content Safety checks in the prompt/response path (see the sketch at the end of this posting).
- Connect datasets, prompts/outputs, and lineage to Microsoft Purview; build audit/reporting workflows (AI Foundry → Purview).
- Enforce Azure Policy baselines (networking, encryption, diagnostics); use IaC for repeatable deployments.
- Build observability (App Insights/Log Analytics), SLOs, alerting, and secure secret management (Key Vault).
- Collaborate with Product and Compliance on governance features (access reviews, approvals, audit exports).

Required skills

- 5+ years building cloud services (preferably Azure) with multi-tenant architectures.
- Hands-on with API Management, App Service or AKS, Functions, Key Vault, and Azure Monitor/Log Analytics.
- Experience with Azure ML (endpoints/registry) and RAI tooling, or an equivalent ML governance stack.
- Proficiency in C#/.NET, Node.js/TypeScript, or Python; SQL/Cosmos; CI/CD (GitHub Actions/Azure DevOps).
- Security/compliance fundamentals: RBAC, Private Endpoints, encryption at rest/in transit; Azure Policy.

Nice to have

- Microsoft Purview (catalog, lineage, access policies) and Azure AI Foundry experience.
- Content moderation integrations (Azure AI Content Safety).
- Experience in regulated environments (health/finance); data residency/sovereignty.

Success in 90 days

- Ship a secure tenant onboarding flow (RBAC + data partitioning).
- Stand up APIM and the first governance microservice with Content Safety checks.
- Register a sample model in Azure ML, attach Responsible AI reports, and stream lineage to Purview.
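To give candidates a feel for the prompt/response guardrail work, here is a minimal, non-authoritative sketch in Python of the kind of check the first governance microservice might run: it pulls the Content Safety key from Key Vault and blocks text whose severity in any harm category meets a threshold. The Key Vault URL, Content Safety endpoint, secret name, and severity threshold are illustrative assumptions, not values from this posting, and the exact response shape should be verified against the installed azure-ai-contentsafety version.

```python
# Hypothetical sketch of a Content Safety gate in the prompt/response path.
# Resource URLs, the secret name, and the threshold are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

KEY_VAULT_URL = "https://example-governance-kv.vault.azure.net"            # placeholder
CONTENT_SAFETY_ENDPOINT = "https://example-cs.cognitiveservices.azure.com"  # placeholder
SEVERITY_THRESHOLD = 2  # illustrative cut-off; tune per tenant policy

# Managed identity (or developer credentials locally) resolves the Key Vault
# secret, so the Content Safety key never sits in app configuration.
credential = DefaultAzureCredential()
secrets = SecretClient(vault_url=KEY_VAULT_URL, credential=credential)
content_safety_key = secrets.get_secret("content-safety-key").value  # placeholder secret name

cs_client = ContentSafetyClient(
    endpoint=CONTENT_SAFETY_ENDPOINT,
    credential=AzureKeyCredential(content_safety_key),
)

def is_text_allowed(text: str) -> bool:
    """Return False if any harm category meets or exceeds the severity threshold."""
    result = cs_client.analyze_text(AnalyzeTextOptions(text=text))
    return all(
        (item.severity or 0) < SEVERITY_THRESHOLD
        for item in result.categories_analysis
    )

if __name__ == "__main__":
    prompt = "Example user prompt to screen before it reaches the model."
    print("allowed" if is_text_allowed(prompt) else "blocked")
```

In the actual service this check would run on both the incoming prompt and the model's response, behind APIM and with per-tenant thresholds; the sketch only illustrates the shape of the integration.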