AI/ML Solution Architect

Experience: 10 years


Posted: 10 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Role: Solution Architect - AI/ML

Experience: 10+ years

Location: Hyderabad & Pune


Key Responsibilities:


Architecture & Infrastructure

  • Design, implement, and optimize end-to-end ML training workflows including infrastructure setup, orchestration, fine-tuning, deployment, and monitoring.
  • Evaluate and integrate multi-cloud and single-cloud training options across AWS and other major platforms.
  • Lead cluster configuration, orchestration design, environment customization, and scaling strategies.
  • Compare and recommend hardware options (GPUs, TPUs, accelerators) based on performance, cost, and availability.


Performance & Optimization

  • Conduct performance benchmarking, hardware comparisons, and cost-performance trade-off analysis.
  • Implement real-time monitoring and control systems with metrics collection, observability, and custom performance tracking.
  • Optimize cost models, budget predictability, and resource utilization.


Data & Training Pipelines

  • Architect and validate data pipelines with storage, persistence, and throughput optimization.
  • Oversee data quality validation, pre-processing, and long-term experiment tracking.
  • Support framework flexibility for diverse training techniques (supervised, unsupervised, fine-tuning, reinforcement learning).


Integration & Deployment

  • Ensure seamless deployment across multi-cloud environments with security, compliance, and regional availability considerations.
  • Collaborate with DevOps and MLOps teams for automation, fault tolerance, job scheduling, and orchestration testing.
  • Provide technical guidance on integration with existing enterprise systems.


Analysis & Recommendations

  • Lead result analysis, insight generation, and actionable recommendations for training performance and user experience improvements.
  • Present performance claims, benchmarking reports, and speculative decoding insights to stakeholders.


Technical Expertise Requirements

  • 10+ years in architecture roles with at least 5 years in AI/ML infrastructure and large-scale training environments.
  • Expert in AWS cloud services (EC2, S3, EKS, SageMaker, Batch, FSx, etc.) and familiar with Azure, GCP, and hybrid/multi-cloud setups.
  • Strong knowledge of AI/ML training frameworks (PyTorch, TensorFlow, Hugging Face, DeepSpeed, Megatron, Ray, etc.).
  • Proven experience with cluster orchestration tools (Kubernetes, Slurm, Ray, SageMaker, Kubeflow).
  • Deep understanding of hardware architectures for AI workloads (NVIDIA, AMD, Intel Habana, TPU).


Performance & Cost Management

  • Demonstrated expertise in performance benchmarking, reliability testing, and training speed optimization.
  • Skilled in cost modeling, budget forecasting, and cost-performance balancing.
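The cost-performance balancing mentioned above often reduces to a cost-per-token model; a minimal sketch, with hypothetical GPU price and throughput figures (not taken from this posting):

```python
def cost_per_million_tokens(gpu_cost_per_hour: float, tokens_per_second: float) -> float:
    """Serving cost in dollars per one million generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical figures: a $4/hr GPU instance sustaining 10,000 tokens/s
print(round(cost_per_million_tokens(4.0, 10_000), 4))  # → 0.1111
```

Holding throughput constant, cost per token scales linearly with instance price, which is why benchmarking and hardware comparison feed directly into budget forecasting.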


Monitoring & Observability

  • Experience with real-time monitoring tools (Prometheus, Grafana, CloudWatch) and custom metric instrumentation.
  • Familiarity with network performance testing, regional load testing, and multi-region deployment strategies.


Soft Skills

  • Strong problem-solving skills with an analytical mindset.
  • Excellent communication skills to present technical trade-offs and strategic recommendations to executives and engineering teams.
  • Ability to lead cross-functional teams and drive innovation in AI infrastructure.


Other Required Skills:


LLM Inference Optimization

  • Expert knowledge of inference optimization techniques including speculative decoding, KV cache optimization (MQA/GQA/PagedAttention), and dynamic batching.
  • Deep understanding of prefill vs. decode phases and of memory-bound vs. compute-bound operations.
  • Experience with quantization methods (INT4/INT8, GPTQ, AWQ) and model parallelism strategies.
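A back-of-the-envelope KV-cache estimate illustrates why optimizations like GQA matter here; the model shape below is a generic 7B-class configuration assumed for illustration:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Total KV-cache size: 2 tensors (K and V) per layer, fp16 by default."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Illustrative 7B-class shape: 32 layers, head_dim 128, 4K context, batch 1, fp16
mha = kv_cache_bytes(32, kv_heads=32, head_dim=128, seq_len=4096, batch=1)
gqa = kv_cache_bytes(32, kv_heads=8,  head_dim=128, seq_len=4096, batch=1)
print(mha / 2**30, "GiB (MHA) vs", gqa / 2**30, "GiB (GQA)")  # 2.0 GiB vs 0.5 GiB
```

Cutting KV heads from 32 to 8 shrinks the cache fourfold per sequence, which directly raises the batch size a fixed GPU memory budget can serve.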


Inference Frameworks

  • Hands-on experience with production inference engines: vLLM, TensorRT-LLM, DeepSpeed-Inference, or TGI.
  • Proficiency with serving frameworks: Triton Inference Server, KServe, or Ray Serve.
  • Familiarity with kernel optimization libraries (FlashAttention, xFormers).


Performance Engineering

  • Proven ability to optimize inference metrics: TTFT (time to first token), ITL (inter-token latency), and throughput.
  • Experience profiling and resolving GPU memory bottlenecks and OOM issues.
  • Knowledge of hardware-specific optimizations for modern GPU architectures (A100/H100).
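The latency metrics named above can be computed from per-token arrival timestamps; a minimal sketch (the timestamp values are invented for illustration):

```python
def latency_metrics(request_time: float, token_times: list[float]) -> tuple[float, float]:
    """Return (TTFT, mean ITL): time to first token and mean inter-token latency."""
    ttft = token_times[0] - request_time
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_itl

# Four tokens arriving after a request issued at t=0.0 (illustrative timestamps)
ttft, itl = latency_metrics(0.0, [0.50, 0.55, 0.60, 0.70])
print(ttft, round(itl, 4))  # 0.5 s TTFT, ~0.0667 s mean ITL
```

TTFT is dominated by the prefill phase while ITL tracks decode speed, so the two metrics usually call for different optimizations.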


System Architecture

  • Design scalable inference systems meeting strict latency SLAs and throughput requirements.
  • Implement production patterns for request routing, load balancing, and model versioning.
  • Balance trade-offs between latency, throughput, cost per token, and model accuracy.

Fission Labs | Software Development | Sunnyvale, CA
