Who We Are
Sirion is the world’s leading AI-native Contract Lifecycle Management (CLM) platform, transforming the end-to-end contracting journey for global enterprises. With Agentic AI at the core, Sirion’s extraction, conversational search, and AI-enhanced negotiation capabilities are redefining how Fortune 500 companies like IBM, Coca-Cola, Citi, and GE manage contracts. With 800+ employees worldwide—AI engineers, legal experts, and researchers—we are continuously innovating to build the most reliable and trustworthy CLM for the enterprises of tomorrow. Sirion is consistently recognized by Gartner, IDC, and Spend Matters as a category leader in CLM innovation.
www.sirion.ai
Power the Future of AI & Why This Role Matters
MLOps Engineer
As an MLOps Engineer, you will work at the intersection of machine learning, cloud infrastructure, and platform engineering.
How You’ll Make an Impact
- Build, automate, and maintain end-to-end MLOps pipelines, including data ingestion, preprocessing, model training, validation, deployment, and inference (a minimal sketch follows this list).
- Design, develop, and operate CI/CD workflows for machine learning, supporting model versioning, artifact management, lineage tracking, and automated rollback strategies.
- Create and maintain internal MLOps platforms and self-service tools that enable data scientists and ML engineers to deploy models with minimal operational overhead.
- Deploy, manage, and optimize ML and LLM inference services in production, including GPU-accelerated workloads.
- Establish comprehensive monitoring, alerting, and observability for model performance, data drift, concept drift, explainability, and infrastructure health.
- Define and enforce ML governance, security, and model risk management practices, embedding auditability, compliance, and access controls into the ML platform.
- Collaborate closely with Data Science, ML Engineering, Data Engineering, Architecture, and DevOps teams to design scalable, resilient ML infrastructure.
- Stay current with emerging trends, tools, and best practices in MLOps, LLMOps, cloud platforms, and distributed systems, driving continuous improvement.
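To make the first responsibility concrete, here is a minimal, purely illustrative sketch of the kind of automated training-and-registration step such a pipeline might include. It uses MLflow and scikit-learn; the tracking URI, experiment name, model name, and accuracy threshold are hypothetical placeholders, not Sirion's actual setup.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical tracking server and experiment name.
mlflow.set_tracking_uri("http://mlflow.internal:5000")
mlflow.set_experiment("contract-clause-classifier")

# Synthetic data stands in for the real ingestion/preprocessing stages.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_val, model.predict(X_val))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_accuracy", accuracy)

    # Simple validation gate: only register models that clear the threshold,
    # so downstream deployment automation never picks up a regressed model.
    if accuracy >= 0.90:
        mlflow.sklearn.log_model(
            model,
            artifact_path="model",
            registered_model_name="contract-clause-classifier",
        )
```

In a real pipeline this step would typically run inside a CI/CD job, with the registered model version then promoted through staging and production by the deployment workflow.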
Skills & Experience You Bring to the Table
- 5–8+ years of hands-on experience designing, deploying, and operating production-grade ML systems.
- Strong programming proficiency in Python, with solid Linux fundamentals and working knowledge of Go or Spark.
- Deep understanding of the machine learning lifecycle, including training, evaluation, deployment, monitoring, and retraining.
- Practical experience with MLOps platforms and tools such as Kubeflow, MLflow, KServe, and NVIDIA ML toolkits.
- Proven experience deploying and optimizing LLMs in production, using technologies such as vLLM, TensorRT-LLM, DeepSpeed, or TGI (a brief illustration follows this list).
- Strong experience working with GPU-based environments, including performance tuning and cost optimization.
- Expertise in cloud platforms (AWS, GCP, or Azure), containerization with Docker, and orchestration using Kubernetes.
- Hands-on experience with CI/CD systems and Infrastructure as Code tools such as Terraform.
- Experience with streaming and messaging technologies (Kafka or Pulsar) for real-time and event-driven ML pipelines.
- Familiarity with vector databases and retrieval pipelines supporting RAG-based systems.
- Strong software engineering fundamentals, including version control, automated testing, debugging, and operational reliability.
- Excellent communication and collaboration skills, with the ability to work effectively across cross-functional teams.
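As a small illustration of the LLM-serving stack named above, the sketch below uses vLLM's offline generation API. The model name, prompts, and parameters are hypothetical placeholders, not Sirion's production configuration.

```python
from vllm import LLM, SamplingParams

# Hypothetical open-weights model and GPU settings.
llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    tensor_parallel_size=1,        # shard the model across GPUs when > 1
    gpu_memory_utilization=0.90,   # fraction of GPU memory vLLM may reserve
)

sampling = SamplingParams(temperature=0.2, max_tokens=256)

prompts = [
    "Summarize the termination clause in plain language: ...",
    "List the renewal obligations in this contract excerpt: ...",
]

# vLLM batches requests with continuous batching and paged attention,
# which is what makes GPU-backed LLM inference cost-effective at scale.
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```

In production, the same engine is more commonly exposed as an OpenAI-compatible HTTP service behind Kubernetes, with monitoring and autoscaling layered on top.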
Mandatory Skills
- MLOps / ML Platform Engineering
- Kubernetes and Docker
- GPU-based model deployment and optimization
- Cloud platforms: AWS and/or GCP
Preferred Skills
- Experience working on AI/ML or GenAI-driven production systems
- Exposure to ML governance, compliance, or model risk management frameworks
Education
- BE / BTech / MCA or equivalent degree from a UGC-accredited university
Excited about this opportunity?