Posted: 3 hours ago
On-site | Full Time
As part of our AI-first strategy at Creatrix Campus, you’ll play a critical role in deploying, optimizing, and maintaining Large Language Models (LLMs) like LLaMA, Mistral, and CodeS across our SaaS platform. This role is not limited to experimentation: it is about operationalizing AI at scale. You’ll ensure our AI services are reliable, secure, cost-effective, and product-ready for higher education institutions in 25+ countries.
You’ll work across infrastructure (cloud and on-prem), MLOps, and performance optimization while collaborating with software engineers, AI developers, and product teams to embed LLMs into real-world applications like accreditation automation, intelligent student forms, and predictive academic advising.
Key Responsibilities:
● Deploy, fine-tune, and optimize open-source LLMs (e.g., LLaMA, Mistral, CodeS, DeepSeek).
● Implement quantization (e.g., 4-bit, 8-bit) and pruning for efficient inference on commodity hardware.
● Build and manage inference APIs (REST/gRPC) for production use.
● Set up and manage on-premise GPU servers and VM-based deployments.
● Build scalable cloud-based LLM infrastructure using AWS (SageMaker, EC2), Azure ML, or GCP Vertex AI.
● Ensure cost efficiency by choosing appropriate hardware and job scheduling strategies.
● Develop CI/CD pipelines for model training, testing, evaluation, and deployment.
● Integrate version control for models, data, and hyperparameters.
● Set up logging, tracing, and monitoring tools (e.g., MLflow, Prometheus, Grafana) for model performance and failure detection.
● Ensure data privacy (FERPA/GDPR) and enforce security best practices across deployments.
● Apply secure coding standards and implement RBAC, encryption, and network hardening for cloud/on-prem.
● Work closely with AI solution engineers, backend developers, and product owners to integrate LLM services into the platform.
● Support performance benchmarking and A/B testing of AI features across modules.
● Document LLM pipelines, configuration steps, and infrastructure setup in internal playbooks.
● Create guides and reusable templates for future deployments and models.
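To make the quantization item above concrete, here is a minimal sketch of symmetric int8 weight quantization in pure Python. This only illustrates the idea; in practice this work would use libraries such as bitsandbytes, GPTQ, or AWQ, and the helper names below are illustrative, not a real API:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (illustrative helper)."""
    scale = max(abs(w) for w in weights) / 127.0  # largest weight maps to +/-127
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 2.54]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Reconstruction error is bounded by half an integer step (scale / 2)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The same idea extends to 4-bit schemes (a 16-level grid instead of 255), trading a larger reconstruction error for roughly 4x smaller weight storage, which is what makes inference on commodity hardware feasible.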
Requirements:
● Strong Python experience with ML libraries (e.g., PyTorch, Hugging Face Transformers).
● Familiarity with LangChain, LlamaIndex, or other RAG frameworks.
● Experience with Docker, Kubernetes, and API gateways (e.g., Kong, NGINX).
● Working knowledge of vector databases (FAISS, Pinecone, Qdrant).
● Familiarity with GPU deployment tools (CUDA, Triton Inference Server, HuggingFace Accelerate).
● 4+ years in an AI/MLOps role, including experience in LLM fine-tuning and deployment.
● Hands-on work with model inference in production environments (both cloud and on-prem).
● Exposure to SaaS and modular product environments is a plus.
● Bachelor’s or Master’s in Computer Science, AI/ML, Data Engineering, or a related field.
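For the RAG and vector-database items above, the core retrieval step reduces to nearest-neighbour search over embeddings. A minimal pure-Python sketch of cosine-similarity retrieval follows; FAISS, Pinecone, and Qdrant replace this brute-force scan with approximate indexes at scale, and the vectors here are toy values, not real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, store, k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

store = [
    ("accreditation_faq", [0.9, 0.1, 0.0]),
    ("student_forms",     [0.1, 0.8, 0.2]),
    ("advising_notes",    [0.2, 0.2, 0.9]),
]
query = [0.85, 0.15, 0.05]  # toy embedding of a user question
print(top_k(query, store, k=2))
```

The retrieved documents would then be injected into the LLM prompt, which is the pattern frameworks like LangChain and LlamaIndex automate.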
Anubavam