Job Description
The opportunity
As an AI Engineer on our EY-DET-FS team, you will be responsible for developing and implementing AI-powered solutions and GenAI applications. You will work hands-on with Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) frameworks, and cloud-native AI tools to deliver intelligent, scalable, and secure enterprise solutions. You'll collaborate closely with AI Architects, Cloud Engineers, and Product Owners to build and deploy next-generation AI services that transform business processes and enhance user experiences.

Your Technical Responsibilities:
- Develop and implement AI- and GenAI-based microservices and APIs leveraging frameworks such as LangChain, LlamaIndex, or Semantic Kernel.
- Develop and integrate LLM-based solutions (OpenAI, Anthropic, Mistral, Azure OpenAI, etc.) for real-world enterprise use cases.
- Build and maintain Retrieval-Augmented Generation (RAG) pipelines involving embedding generation, document chunking, and vector search using tools such as FAISS, Pinecone, or Weaviate.
- Implement prompt engineering, fine-tuning, and model evaluation techniques to optimize response quality and accuracy.
- Deploy AI models and services using cloud AI platforms (AWS Bedrock, Azure AI, Vertex AI).
- Work with data pipelines to preprocess, clean, and structure data for model ingestion.
- Develop API interfaces for integrating AI services into existing enterprise platforms and applications.
- Ensure strong adherence to DevOps/MLOps practices, including versioning, CI/CD automation, monitoring, and rollback strategies.
- Collaborate with the architecture and DevOps teams on containerized deployments using Docker and Kubernetes.
- Implement logging, monitoring, and performance tracking for deployed AI models and APIs.
- Continuously explore emerging AI frameworks, open-source models, and new deployment patterns to enhance solution design.
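For candidates unfamiliar with the RAG workflow mentioned above (document chunking, embedding generation, vector search), here is a minimal illustrative sketch. It is not EY's implementation: it uses a toy bag-of-words embedding in place of a real embedding model, and brute-force cosine similarity in place of a vector store such as FAISS, Pinecone, or Weaviate; all function names here are hypothetical.

```python
import math
from collections import Counter

def chunk(text, size=40):
    # Split a document into fixed-size word chunks.
    # Real pipelines typically use token- or sentence-aware splitters.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, vocab):
    # Toy bag-of-words embedding over a fixed vocabulary.
    # Production systems call an embedding model API instead.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, vocab, k=2):
    # Brute-force nearest-neighbor search over all chunks.
    # A vector store (FAISS/Pinecone/Weaviate) replaces this at scale;
    # the top-k chunks are then passed to the LLM as grounding context.
    qv = embed(query, vocab)
    scored = sorted(chunks, key=lambda c: cosine(embed(c, vocab), qv), reverse=True)
    return scored[:k]
```

In a real deployment, the retrieved chunks are injected into the LLM prompt so the model answers from the enterprise corpus rather than from its training data alone.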
Your Management & Collaboration Responsibilities:
- Collaborate with AI Architects and cross-functional teams to convert solution blueprints into implementable modules.
- Participate in technical discussions, design reviews, and sprint planning to ensure smooth delivery.
- Support project managers in defining realistic timelines and technical dependencies.
- Maintain high-quality documentation for models, APIs, and data pipelines.
- Contribute to proofs of concept (POCs) and internal accelerators showcasing new AI capabilities.
- Assist in evaluating third-party tools, APIs, and frameworks for AI adoption within enterprise systems.

Your People Responsibilities (If Applicable):
- Support peer learning through code reviews, internal demos, and technical discussions.
- Share best practices in AI coding standards, prompt design, and data preparation.

Requirements (Qualifications):
Education: BE/BTech/MCA with 4-8 years of total experience, including 1-2 years of relevant AI/ML or GenAI project experience.

Mandatory Skills:
- Programming: Python (preferred), Java, or Node.js
- AI/ML Frameworks: LangChain, LlamaIndex, Hugging Face Transformers, OpenAI API, Azure OpenAI, or Anthropic API
- LLM Expertise: Experience with GPT, Claude, Llama, Mistral, or other open-source LLMs
- Vector Stores for RAG: Pinecone, FAISS, Chroma, or Weaviate
- Cloud AI Platforms: AWS Bedrock, Azure Cognitive Services, Google Vertex AI
- APIs & Integration: REST/gRPC APIs, Swagger/OpenAPI, Postman
- Data & Storage: MySQL, MongoDB, Redis, or vector stores
- DevOps/MLOps: Git, Docker, Kubernetes, CI/CD (GitHub Actions, Jenkins), MLflow (nice to have)
- Testing: PyTest, Postman, and API-level testing
- Version Control: Git/GitHub, Bitbucket

Preferred Skills:
- Experience with prompt optimization and evaluation
- Familiarity with LangGraph or CrewAI for agentic workflows
- Basic knowledge of transformer architecture internals
- Experience with data pipeline tools (Airflow, Prefect)
- Awareness of responsible AI and model governance principles
- Agile/DevOps delivery experience