Posted: 2 weeks ago | On-site | Full Time
· 3+ years of experience in machine learning operations or software/platform development.
· Strong experience with Azure ML, Azure DevOps, Blob Storage, and containerized model deployments on Azure.
· Strong knowledge of programming languages commonly used in AI/ML, such as Python, R, or C++.
· Experience with the Azure cloud platform, its machine learning services, and related best practices.
Roles:
· Design, develop, and maintain complex, high-performance, and scalable MLOps systems that interact with AI models and systems.
· Collaborate with cross-functional teams, including data scientists, AI researchers, and AI/ML engineers, to understand requirements, define project scope, and ensure alignment with business goals.
· Offer technical leadership and expertise in choosing, evaluating, and implementing software technologies, tools, and frameworks in a cloud-native (Azure + AML) environment.
· Troubleshoot and resolve intricate software problems, ensuring optimal performance and reliability when interfacing with AI/ML systems.
· Participate in software development project planning and estimation, ensuring efficient resource allocation and timely solution delivery.
· Contribute to the development of continuous integration and continuous deployment (CI/CD) pipelines.
· Contribute to the development of high-performance data pipelines, storage systems, and data processing solutions.
· Drive integration of GenAI models (e.g., LLMs, foundation models) in production workflows, including prompt orchestration and evaluation pipelines.
· Support edge deployment use cases via model optimization, conversion (e.g., to ONNX, TFLite), and containerization for edge runtimes (a brief ONNX sketch follows this list).
· Contribute to the creation and maintenance of technical documentation, including design specifications, API documentation, data models, data flow diagrams, and user manuals.
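For illustration only, here is a minimal sketch of the PyTorch → ONNX → ONNX Runtime path referenced in the edge-deployment role above; the TinyClassifier model, file name, and tensor shapes are hypothetical placeholders, not part of this role's actual stack.

```python
# Hypothetical example: export a small PyTorch model to ONNX and run it
# with ONNX Runtime, as in the edge-deployment responsibility above.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

class TinyClassifier(nn.Module):
    """Placeholder model standing in for a real production model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 16)

# Export to ONNX with a dynamic batch dimension so the same artifact serves
# both single-sample edge inference and batched evaluation.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)

# Run the exported model with ONNX Runtime (CPU provider, typical for edge targets).
session = ort.InferenceSession("tiny_classifier.onnx", providers=["CPUExecutionProvider"])
logits = session.run(None, {"features": np.random.rand(1, 16).astype(np.float32)})[0]
print(logits.shape)  # (1, 4)
```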
· Experience with machine learning frameworks such as TensorFlow, PyTorch, or Keras.
· Experience with version control systems, such as Git, and CI/CD tools, such as Jenkins, GitLab CI/CD, or Azure DevOps.
· Knowledge of containerization technologies like Docker and Kubernetes, and infrastructure-as-code tools such as Terraform or Azure Resource Manager (ARM) templates.
· Experience with Generative AI workflows, including prompt engineering, LLM fine-tuning, or retrieval-augmented generation (RAG) (a brief retrieval sketch follows this list).
· Exposure to GenAI frameworks and tooling such as LangChain, LlamaIndex, Hugging Face Transformers, and OpenAI API integration.
· Experience deploying optimized models on edge devices using ONNX Runtime, TensorRT, OpenVINO, or TFLite.
· Hands-on experience with monitoring LLM outputs, feedback loops, or LLMOps best practices.
· Familiarity with edge inference hardware such as NVIDIA Jetson, Intel Movidius, or ARM Cortex-A/NPU devices.
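As a rough illustration of the RAG experience listed above, a minimal retrieval sketch follows; the document set, embedding model, and query are hypothetical, and the actual LLM call is omitted.

```python
# Hypothetical sketch of the retrieval step in a RAG workflow: embed a small
# document set, retrieve the passages most similar to a query, and assemble
# a prompt for an LLM. Model name and documents are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Azure ML managed online endpoints serve registered models behind a REST API.",
    "ONNX Runtime executes exported models on CPU, GPU, and edge accelerators.",
    "Azure DevOps pipelines can retrain and redeploy models on a schedule.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    query_vector = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How do I deploy a model as a REST endpoint on Azure?"
context = "\n".join(retrieve(query))

# The assembled prompt would be passed to an LLM (e.g., via the OpenAI API);
# the call itself is omitted here.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```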
Interested candidates can also share their resumes at shraddha@careerstone.in
Note: Candidates with a maximum notice period of one month are preferred.
Career Stone Consultant
Location: Gurugram, Haryana, India
Salary: Not disclosed