Join Delphi - Where Innovation Meets Transformation
At Delphi, we believe in creating an environment where our people thrive. Our hybrid work model empowers you to choose where you work, whether it's from the office, your home, or a mix of both, so you can prioritize what matters most. We are committed to supporting your personal goals, family, and overall well-being while driving transformative results for our clients. We welcome exceptional talent from anywhere across the globe. Interviews and onboarding are conducted virtually, reflecting our digital-first mindset.
Rooted in the region, we specialize in delivering tailored, impactful solutions in Data, Advanced Analytics and AI, Infrastructure, Cloud Security, and Application Modernization. Whether it's enabling predictive analytics, transforming operations with automation, or driving customer engagement with intelligent platforms, we are the trusted partner for organizations ready to embrace a smarter, more efficient future.
About The Role
We are looking for a Senior AI/ML Solution Architect with deep expertise in Generative AI and agentic systems to lead the design and implementation of enterprise-scale AI solutions. This role requires a unique blend of hands-on technical expertise in both Large Language Models (LLMs) and Small Language Models (SLMs), combined with the architectural vision to deploy these solutions across diverse computing environments. The ideal candidate will architect scalable agentic solutions, implement advanced fine-tuning strategies, and design comprehensive integration systems that connect AI capabilities with enterprise applications. You will be at the forefront of our AI transformation initiatives, working with cutting-edge technologies while maintaining a practical approach to deployment and optimization.
Job Responsibilities
Architecture & Design
- Design and architect scalable agentic solutions using advanced LLM capabilities.
- Implement Model Context Protocol (MCP) integrations to connect applications with diverse external services and APIs.
- Develop multi-agent orchestration systems for complex workflow automation.
- Design context and memory management systems for persistent agent interactions (see the agent-loop sketch after this list).
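As a small illustration of the orchestration and memory-management work described above, the following is a minimal sketch of a single-agent "reason, act, observe" loop with a rolling memory budget. It is not tied to any specific framework; call_llm, the tool registry, and the ticket-search tool are hypothetical placeholders for whichever model endpoint and enterprise services a real system would use.

```python
# Minimal single-agent loop with a rolling memory and a tool registry.
# call_llm() and search_tickets() are hypothetical stand-ins, not real APIs.
import json
from typing import Callable

MAX_MEMORY_TURNS = 20  # simple context budget for persistent interactions

def search_tickets(query: str) -> str:
    """Hypothetical enterprise tool the agent can call."""
    return json.dumps({"query": query, "results": ["TICKET-101", "TICKET-204"]})

TOOLS: dict[str, Callable[[str], str]] = {"search_tickets": search_tickets}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a chat-completion call; a real implementation would
    call an LLM/SLM endpoint and return either a tool request or an answer."""
    return {"type": "final", "content": "No open tickets match that query."}

def run_agent(user_goal: str, memory: list[dict]) -> str:
    memory.append({"role": "user", "content": user_goal})
    for _ in range(5):  # hard cap on reason/act cycles
        reply = call_llm(memory[-MAX_MEMORY_TURNS:])  # trim memory to budget
        if reply["type"] == "tool_call":
            observation = TOOLS[reply["name"]](reply["arguments"])
            memory.append({"role": "tool", "content": observation})
            continue
        memory.append({"role": "assistant", "content": reply["content"]})
        return reply["content"]
    return "Stopped: step budget exhausted."

if __name__ == "__main__":
    print(run_agent("Find open tickets about login failures", memory=[]))
```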
Technical Implementation
- Build and optimize Retrieval-Augmented Generation (RAG) systems for efficient knowledge retrieval
- Implement agent frameworks (LangChain, LangGraph, Semantic Kernel, Agno) for various deployment environments
- Design and deploy model inference pipelines optimized for different computing environments (cloud, edge, on-premises)
- Develop comprehensive fine-tuning strategies for both Large Language Models (LLMs) and Small Language Models (SLMs)
- Architect SLM deployment strategies for resource-constrained environments
- Implement model compression and quantization techniques for efficient inference (see the quantized-loading sketch after this list)
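To make the quantization point concrete, here is a sketch of loading a small language model with 4-bit (NF4) weight quantization via Hugging Face Transformers and bitsandbytes. The model id is illustrative only, and the snippet assumes a GPU environment with the relevant libraries installed.

```python
# Sketch: 4-bit (NF4) quantized loading of a representative SLM for
# memory-efficient inference. Model id and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"  # representative SLM id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on available devices
)

prompt = "Summarize the benefits of quantized inference in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```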
Integration & Connectivity
- Architect REST/gRPC/GraphQL APIs and SDK integrations for seamless service connectivity
- Implement event-driven architectures using webhooks and message buses (see the webhook sketch after this list)
- Design secure authentication and authorization systems (SSO/OIDC)
- Build connectors for popular platforms (Slack, Jira, Salesforce, CRM/ERP systems)
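The sketch below shows one common shape for the event-driven work above: a webhook receiver that verifies an HMAC signature before accepting an event. The header name, shared secret, and payload fields are assumptions; real platforms such as Slack or Jira each document their own signing scheme.

```python
# Sketch of a webhook receiver with HMAC signature verification (FastAPI).
# Header name, secret handling, and payload shape are illustrative assumptions.
import hashlib
import hmac
import os

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET", "change-me")

def signature_is_valid(body: bytes, received_signature: str) -> bool:
    expected = hmac.new(WEBHOOK_SECRET.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)

@app.post("/webhooks/events")
async def handle_event(request: Request):
    body = await request.body()
    signature = request.headers.get("X-Signature-256", "")
    if not signature_is_valid(body, signature):
        raise HTTPException(status_code=401, detail="invalid signature")
    event = await request.json()
    # A production system would publish the event to a message bus here
    # rather than processing inline, keeping the endpoint fast and idempotent.
    return {"status": "accepted", "event_type": event.get("type", "unknown")}
```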
Data & Model Management
- Design comprehensive data preprocessing pipelines including cleaning, deduplication, and PII redaction
- Implement embedding creation and re-embedding strategies for optimal retrieval
- Develop chunking and windowing strategies for mobile-optimized content processing (see the chunking sketch after this list)
- Establish model selection criteria and evaluation frameworks
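As an illustration of the preprocessing work above, here is a minimal sketch of hash-based deduplication followed by sliding-window chunking with overlap. The chunk size and overlap values are illustrative; production settings depend on the embedding model's context limit and the target device.

```python
# Sketch: exact-duplicate removal by normalized content hash, then
# sliding-window chunking with overlap before embedding. Sizes are illustrative.
import hashlib

def deduplicate(documents: list[str]) -> list[str]:
    """Drop exact duplicates using a whitespace/case-normalized content hash."""
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

def chunk(text: str, max_words: int = 200, overlap: int = 40) -> list[str]:
    """Sliding window over words so retrieval units keep boundary context."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

if __name__ == "__main__":
    docs = deduplicate(["Policy A applies to all staff.",
                        "Policy  a applies to all staff."])
    print(len(docs), "unique document(s)")
    print(len(chunk(" ".join(["token"] * 500))), "chunks from a 500-word text")
```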
Job Requirements
Core AI/ML Expertise
- Foundation Models: Deep experience with GPT-4, Claude, LLaMA, and other state-of-the-art LLMs
- Small Language Models (SLMs): Expertise in deploying and optimizing SLMs (Phi-3, Gemma, TinyLlama) for mobile environments
- Agent Frameworks: Proficiency in LangChain, LangGraph, Microsoft Semantic Kernel, Agno, and custom agent development
- RAG Systems: Advanced knowledge of retrieval-augmented generation, vector databases, and semantic search (see the retrieval sketch after this list)
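For the RAG item above, the following is a minimal retrieval sketch: embed a query, rank stored chunks by cosine similarity, and assemble a grounded prompt. The embed() function is a deliberately crude stand-in for a real embedding model or hosted embedding endpoint.

```python
# Minimal RAG retrieval sketch. embed() is a placeholder bag-of-words hasher,
# standing in for a real embedding model; a vector database would replace the
# in-memory matrix in production.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedder: hash words into a fixed-size vector and normalize."""
    vectors = np.zeros((len(texts), 256))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            vectors[i, hash(word) % 256] += 1.0
    return vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-9)

def retrieve(query: str, chunks: list[str], chunk_vectors: np.ndarray, k: int = 2) -> list[str]:
    query_vec = embed([query])[0]
    scores = chunk_vectors @ query_vec          # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

chunks = [
    "Refunds are processed within 5 business days.",
    "The VPN requires multi-factor authentication.",
    "Expense reports are due by the 5th of each month.",
]
chunk_vectors = embed(chunks)
question = "How long do refunds take?"
context = retrieve(question, chunks, chunk_vectors)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQuestion: " + question
print(prompt)
```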
Fine-tuning & Adaptation
- Advanced fine-tuning techniques: LoRA/QLoRA, DoRA, AdaLoRA for parameter-efficient training (see the LoRA sketch after this list)
- Model compression: Pruning, quantization (INT8/INT4), knowledge distillation
- Prompt-tuning, adapters, prefix tuning, and P-tuning v2 methodologies
- RLHF/RLAIF techniques for alignment and preference learning
- Domain-specific fine-tuning for mobile use cases and vertical applications
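To ground the parameter-efficient fine-tuning items above, here is a sketch of a LoRA adapter setup using the Hugging Face peft library. The base model id, target modules, and rank are illustrative and depend on the architecture being adapted; the training data and loop are omitted.

```python
# Sketch: attach LoRA adapters to a causal LM with peft. Base model id,
# target modules, and rank are illustrative; training loop is omitted.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

BASE_MODEL = "meta-llama/Llama-3.2-1B"  # representative base model id

lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights

# The wrapped model can now be handed to a standard supervised fine-tuning
# loop; only the adapter weights are updated and saved.
```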
Deployment & Optimization
- SLM Deployment: Expertise in deploying Small Language Models across various computing environments
- Multi-Platform Optimization: Experience optimizing both LLMs and SLMs for cloud, edge, and on-premises deployment
- Efficient Inference: Knowledge of quantization (GPTQ, AWQ, GGML), pruning, and distillation techniques
- Model Compression: Advanced techniques for reducing model size while maintaining performance
- Real-time Processing: Expertise in streaming inference and adaptive reasoning depth control
- Performance Optimization: Proficiency in autoscaling, rate limiting, and resource management (see the rate-limiter sketch below)
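As a small example of the resource-management side of this list, here is a token-bucket rate limiter of the kind often placed in front of an inference endpoint. The rate and burst values are illustrative assumptions.

```python
# Simple token-bucket rate limiter as a sketch of request-level resource
# management for an inference service. Thresholds are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = capacity        # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
for i in range(12):
    status = "served" if bucket.allow() else "throttled (429)"
    print(f"request {i}: {status}")
```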
Adaptive Fine-tuning
- Environment-specific model adaptation and optimization
- Federated learning approaches for distributed fine-tuning
- Few-shot and zero-shot learning techniques for resource-efficient adaptation
Integration Technologies
- MCP Implementation: Deep understanding of Model Context Protocol for service integration
- API Development: Expertise in designing and implementing REST, gRPC, and GraphQL APIs
- Event Systems: Experience with event buses, webhooks, and real-time communication
- Security: Knowledge of secure storage, caching, and access control systems
Development Frameworks
- Libraries: TensorFlow, PyTorch, Hugging Face Transformers, LlamaIndex
- Application Development: Web frameworks, desktop applications, API development
- Cloud Platforms: AWS, GCP, Azure with focus on AI/ML services
- DevOps: CI/CD pipelines, containerization (Docker/Kubernetes), monitoring
Preferred Qualifications
- Master's or PhD in Computer Science, AI, Machine Learning, or related field
- Published research or contributions to open-source AI/ML projects
- Experience with multi-modal models and cross-modal applications
- Knowledge of MLOps best practices and model lifecycle management
- Experience with regulatory compliance in AI systems (GDPR, AI Act, etc.)
- Track record of leading AI transformation initiatives in enterprise environments
- Certifications in cloud platforms (AWS, GCP, Azure) with focus on AI/ML services
Technical Competencies to Be Assessed
- System design and architecture for distributed AI systems
- Code review and optimization for production AI deployments
- Performance benchmarking and model evaluation methodologies
- Cost optimization strategies for large-scale AI deployments
- Security and privacy considerations in AI systems
- Scalability patterns for AI applications
What We Offer
At Delphi, we are dedicated to creating an environment where you can thrive, both professionally and personally. Our competitive compensation package, performance-based incentives, and health benefits are designed to ensure you're well-supported. We believe in your continuous growth and offer company-sponsored certifications, training programs, and skill-building opportunities to help you succeed. We foster a culture of inclusivity and support, with remote work options and a fully supported work-from-home setup to ensure your comfort and productivity. Our positive and inclusive culture includes team activities as well as wellness and mental health programs, so you always feel supported.