Associate Director AI Engineering

Experience

10 years


Posted: 1 day ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

About IKS Health

IKS Health enables the enhanced delivery of exceptional healthcare for today’s practicing clinicians, medical groups, and health systems. Supporting healthcare providers through every function of the patient visit, IKS Health is a go-to resource for organizations looking to scale effectively, improve quality, and achieve cost savings through integrated technology and forward-thinking solutions. Founded in 2007, we have grown into a global workforce of 14,000 employees serving over 150,000 clinicians in many of the largest hospitals, health systems, and specialty groups in the United States.


IKS Health revitalizes the clinician-patient relationship while empowering healthcare organizations to thrive. We take on the chores of healthcare — spanning administrative, clinical, and operational burdens — so that clinicians can focus on their core purpose: delivering great care. Combining pragmatic technology and dedicated experts, our solutions enable stronger, financially sustainable enterprises. By bringing joy and purpose back to medicine, we’re creating transformative value in healthcare and empowering clinicians to build healthier communities.


Job Summary

We are seeking an Associate Director of AI Engineering to provide technical direction and own the architecture, development, and deployment of our AI platforms and services. The role requires strong software engineering fundamentals combined with hands-on AI/ML leadership.


Key Responsibilities

  1. AI Infrastructure & Architecture:

    Design, build, and maintain scalable, reliable, and efficient infrastructure for training and deploying machine learning models at scale, including support for foundation models, LLM fine-tuning, and retrieval-augmented generation (RAG). Own end-to-end technical architecture, design reviews, and system design for AI platforms and services.
  2. Team Leadership & Mentorship:

    Lead and mentor a team of AI/ML and software engineers, fostering a culture of engineering excellence, innovation, and collaboration. Guide the team in best practices for software development and MLOps, while remaining hands-on in code, design, and technical problem-solving.
  3. MLOps & Automation:

    Own and drive the MLOps strategy. Implement and manage CI/CD pipelines for machine learning models, automating the entire lifecycle from data preparation to model monitoring and extending pipelines for LLM deployment and monitoring.
  4. Production Deployment:

    Lead the technical efforts to integrate and deploy machine learning models into our production environments, ensuring high availability, low latency, and scalability, including deployment of LLM-powered applications and AI agents.
  5. Cross-Functional Collaboration:

    Partner closely with data scientists, software engineers, architects, and product managers to understand model requirements and translate them into robust engineering solutions. Influence product and platform roadmaps through strong technical input and feasibility assessments.
  6. Performance Optimization:

    Continuously monitor and optimize the performance of our AI systems, including model inference speed, resource utilization, and cost-effectiveness, with a focus on optimizing LLM inference efficiency.
  7. Technology & Tooling:

    Evaluate and select the best tools, frameworks, and technologies for our AI engineering stack. Define and promote engineering standards, reference architectures, and reusable components for AI-driven solutions. Stay current with the latest advancements in the field.
  8. Code Quality & Best Practices:

    Champion and enforce high standards for code quality, testing, security, reliability, and documentation within the AI engineering team.


Qualifications & Skills

  • Educational Background:

    Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field.
  • Professional Experience:

    10+ years of professional software engineering experience, with at least 3–4 years in a technical hands-on leadership role focused on AI/ML engineering or MLOps, including applied experience with large language models (LLMs) and Generative AI in production. Demonstrated track record of designing and delivering complex, distributed, production-grade AI systems.
  • GenAI & Agentic AI Expertise:

    Hands-on experience with agentic AI frameworks (e.g., LangChain, LangGraph, CrewAI) and vector databases (e.g., FAISS, Pinecone, PostgreSQL, Azure AI Search), as well as with computer vision.
  • Software Engineering Excellence:

    Deep expertise in a major programming language (e.g., Python, Java, Go) and a strong foundation in software architecture, data structures, and algorithms. Experience with microservices, APIs, and high-throughput, low-latency systems is a plus.
  • MLOps Expertise:

    Proven, hands-on experience building and managing MLOps pipelines using tools like Kubernetes, Docker, Jenkins, MLflow, Kubeflow, or similar technologies.
  • Cloud Proficiency:

    Extensive experience with at least one major cloud platform (AWS, GCP, or Azure) and its AI/ML services (e.g., SageMaker, Vertex AI, Azure ML).
  • Leadership:

    Demonstrated ability to lead technical teams, mentor engineers, and manage complex engineering projects from conception to completion, balancing strategic thinking with hands-on technical execution.
  • Problem-Solving:

    Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues in distributed systems. Ability to quickly understand business requirements, user needs, and technical constraints. Skilled in converting ambiguous or high-level problems into structured AI/ML solutions.
  • Solution Design & Approach Evaluation:

    Ability to propose multiple solution approaches to any AI/ML or GenAI problem. Strength in evaluating the pros, cons, feasibility, scalability, and risk associated with each approach. Expertise in selecting the best-fit solution considering data availability, cost, time, and long-term maintainability.
  • Strong sense for when to use:

    Traditional ML vs. deep learning

    Classical NLP vs. LLMs

    Fine-tuning vs. RAG vs. prompting strategies


Preferred Qualifications

  • Experience with large-scale data processing technologies (e.g., Apache Spark, Kafka).
  • Familiarity with infrastructure as code (IaC) tools like Terraform or CloudFormation.
  • Experience optimizing deep learning models for inference (e.g., using TensorRT, ONNX) and optimizing LLM inference for latency and cost efficiency.
  • Contributions to open-source AI, GenAI, or MLOps projects.


Note: This is a work-from-office role based in Navi Mumbai.
