5.0 - 10.0 years
5 - 10 Lacs
Gurgaon, Haryana, India
On-site
Responsibilities:
- Build and own the full voice bot pipeline, including ASR, NLU, dialog management, tool calling, and TTS.
- Architect systems using MCP to connect ASR, memory, APIs, and LLMs in real time.
- Implement RAG to ground responses in data from knowledge bases, inventory, and FAQs.
- Design scalable vector search systems for memory embedding and FAQ handling.
- Engineer low-latency ASR and TTS pipelines, optimizing for natural turn-taking.
- Apply fine-tuning, LoRA, and instruction tuning to reduce hallucinations and align model tone.
- Build observability systems and QA pipelines to monitor calls and analyze model behavior.
- Collaborate with cross-functional teams to scale the voice bot to thousands of users.
- Design modular, observable, and resilient AI systems.
- Implement retrieval pipelines, function calls, and prompt chaining across workflows.
- Expertly chunk, embed, and retrieve documents in RAG systems (see the retrieval sketch below).
- Debug latency issues and optimize for low round-trip time.
- Trace hallucinations to root causes and fix them via guardrails or tool access.
- Build prototypes using open-source or hosted tools with speed and flexibility.
Requirements:
- 5+ years in AI/ML or voice/NLP with real-time experience.
- Deep knowledge of LLM orchestration, vector search, and prompt engineering.
- Experience with ASR (Whisper, Deepgram), TTS (ElevenLabs, Coqui), and OpenAI models.
- Skilled in latency optimization and real-time audio pipelines.
- Hands-on with Python, FastAPI, vector DBs, and cloud platforms.
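To illustrate the chunk-embed-retrieve step that such a RAG pipeline relies on, here is a minimal Python sketch; the chunk size, overlap, and cosine-similarity retrieval are illustrative assumptions, and the embed() stub stands in for whatever embedding model the team actually uses.

import numpy as np

def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    """Stub: plug in a real embedding model here (e.g. a sentence-transformer)."""
    raise NotImplementedError

class FaqIndex:
    def __init__(self, docs: list[str]):
        self.chunks = [c for d in docs for c in chunk(d)]
        vecs = embed(self.chunks)
        # Normalize once so a dot product gives cosine similarity.
        self.vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed([query])[0]
        q = q / np.linalg.norm(q)
        scores = self.vecs @ q
        top = np.argsort(scores)[::-1][:k]
        return [self.chunks[i] for i in top]

The retrieved chunks would then be injected into the LLM prompt so the bot answers from knowledge-base or inventory data rather than from memory alone.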
Posted 2 months ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru, Karnataka, India
On-site
What You'll Do
- Design & Build: Develop multi-agent AI systems for the UCaaS platform, focusing on NLP, speech recognition, audio intelligence, and LLM-powered interactions.
- Rapid Experiments: Prototype with open-weight models (Mistral, LLaMA, Whisper, etc.) and scale what works (a transcription sketch follows this listing).
- Code for Excellence: Write robust code for AI/ML libraries and champion software best practices.
- Optimize for Scale & Cost: Engineer scalable AI pipelines, focusing on latency, throughput, and cloud costs.
- Innovate with LLMs: Fine-tune and deploy LLMs for summarization, sentiment and intent detection, RAG pipelines, multi-modal inputs, and multi-agent task automation.
- Own the Stack: Lead multi-agent environments from data to deployment and scale.
- Collaborate & Lead: Integrate AI with cross-functional teams and mentor junior engineers.
What You Bring
- Experience: 6-10 years of professional experience, with a mandatory minimum of 2 years in a hands-on role on a real-world, production-level AI/ML project.
- Coding & Design: Expert-level programming skills in Python and proficiency in designing and building scalable, distributed systems.
- ML/AI Expertise: Deep, hands-on experience with core ML/AI libraries and frameworks, agentic systems, and RAG pipelines; hands-on experience using vector DBs.
- LLM Proficiency: Proven experience working with and fine-tuning large language models (LLMs).
- Scalability & Optimization Mindset: Demonstrated experience building and scaling AI services in the cloud, with a strong focus on performance tuning and cost optimization of agents specifically.
Nice to Have
- You've tried out agent frameworks like LangGraph, CrewAI, or AutoGen and can explain the pros and cons of autonomous vs. orchestrated agents.
- Experience with MLOps tools and platforms (e.g., Kubeflow, MLflow, SageMaker).
- Real-time streaming AI experience: token-level generation, WebRTC integration, or live transcription systems.
- Contributions to open-source AI/ML projects or a strong public portfolio (GitHub, Kaggle).
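For the point about prototyping with open-weight models such as Whisper, a minimal offline speech-to-text sketch with the open-source openai-whisper package is shown below; the model size and file name are placeholders, and a production UCaaS pipeline would stream audio rather than read a file.

import whisper  # pip install openai-whisper

# Load a small open-weight checkpoint; larger models trade latency for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded call; in production this would be a streaming pipeline.
result = model.transcribe("call_recording.wav")
print(result["text"])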
Posted 2 months ago
8.0 - 12.0 years
0 Lacs
maharashtra
On-site
We are looking for exceptional individuals to join our team at ScalePad as Head of AI Engineering. ScalePad is a prominent software-as-a-service (SaaS) company operating globally to provide Managed Service Providers (MSPs) with the tools and support needed to enhance client value in the ever-evolving IT landscape.

As a member of our tech-management team, you will lead AI development initiatives, shape our AI strategy, and guide teams in creating impactful AI applications. This hands-on leadership role involves mentoring teams, improving developer productivity, and ensuring best practices in AI development, software engineering, and system design. Your responsibilities will include designing state-of-the-art AI applications, leveraging advanced techniques such as Machine Learning (ML), Large Language Models (LLMs), Graph Neural Networks (GNNs), and Retrieval-Augmented Generation (RAG). You will also focus on fostering an environment of responsible AI practices, governance, and ethics, advocating for AI-first product thinking, and collaborating with various teams to align AI solutions with business objectives.

To excel in this role, you should possess strong technical expertise in AI, ML, and software architecture principles, and have a proven track record of integrating AI advancements into engineering execution. Additionally, experience in AI governance, ethics, and managing globally distributed teams will be essential. We are seeking a curious, hands-on leader who is passionate about developing talent, driving innovation, and ensuring AI excellence within our organization.

Joining our team at ScalePad will offer you the opportunity to lead the evolution of AI-driven products, work with cutting-edge technologies, and make a global impact by influencing AI-powered decision-making at an enterprise level. As a Rocketeer, you will enjoy ownership through our Employee Stock Option Plan (ESOP), benefit from annual training and development opportunities, and work in a dynamic, entrepreneurial setting that promotes growth and stability. If you are ready to contribute to a culture of innovation, collaboration, and success, we invite you to apply for this role. Please note that only candidates eligible to work in Canada will be considered.

At ScalePad, we are committed to fostering Diversity, Equity, Inclusion, and Belonging (DEIB) to create a workplace where every individual's unique experiences and perspectives are valued. Join us in building a stronger, more inclusive future where everyone has the opportunity to thrive and grow.
Posted 2 months ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
3+ years of experience in Gen AI, AI engineering, and solution architecture. Gen AI specialist with strong experience in LLMs, AI engineering, AI project deployment and production, Gen AI innovation, Gen AI architecture, and Gen AI use-case design, build, deployment, and scaling at client sites. 2+ years of strong experience in AI, ML, and data science: solutioning, architecture, build, deployment, MLOps, production, and managed services. Graduates of top engineering colleges preferred. Candidates from tech startups are preferred; consulting and tech firms are also a good fit, as are cloud firms where budget allows. Full-stack tech skills (front end, middle tier, back end, custom apps, automation) will be a big bonus. Hands-on experience in solution architecture and AI project delivery.
Posted 2 months ago
5.0 - 8.0 years
27 - 30 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Mandatory Skills:
1. 5+ years of experience in the design and development of state-of-the-art language models; use off-the-shelf LLM services, such as Azure OpenAI, to integrate LLM capabilities into applications.
2. Deep understanding of language models and strong proficiency in designing and implementing RAG-based workflows to enhance content generation and information retrieval.
3. Experience in building, customizing, and fine-tuning LLMs via OpenAI Studio, extended through Azure OpenAI cognitive services, for rapid PoCs.
4. Proven track record of deploying and optimizing LLM models in the cloud (AWS, Azure, or GCP) for inference in production environments, including optimizing for inference speed, memory efficiency, and resource utilization.
5. Apply prompt engineering techniques to design refined and contextually relevant prompts for language models (see the sketch below).
6. Monitor and analyze the performance of LLMs by experimenting with various prompts, evaluating results, and refining strategies accordingly.
7. Build customizable, conversable AI agents for complex tasks using CrewAI and LangGraph to enhance Gen AI solutions.
8. Proficiency in MCP (Model Context Protocol) for optimizing context-aware AI model performance and integration is a plus.
Location: Mumbai, Pune, Bangalore, Chennai, and Noida
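As a hedged illustration of point 5 (prompt engineering against an Azure OpenAI deployment), the sketch below uses the current openai Python SDK; the endpoint, key handling, deployment name, API version, and prompt wording are placeholders to adapt to your own environment, not details taken from this posting.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    api_key="<key-from-a-secret-store>",                        # never hard-code in practice
    api_version="2024-02-01",                                   # illustrative version string
)

# A constrained system prompt plus retrieved context is a common RAG prompt pattern.
response = client.chat.completions.create(
    model="gpt-4o-deployment",  # name of your Azure deployment, not the base model name
    messages=[
        {"role": "system",
         "content": "You are a support assistant. Answer only from the provided context; "
                    "if the context is insufficient, say you don't know."},
        {"role": "user",
         "content": "Context:\n<retrieved chunks>\n\nQuestion: How do I reset my password?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)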
Posted 2 months ago
6.0 - 10.0 years
25 - 30 Lacs
Hyderabad
Work from Office
We seek a Senior AI Scientist with strong ML fundamentals and data engineering expertise to lead the development of scalable AI/LLM solutions. You will design, fine-tune, and deploy models (e.g., LLMs, RAG architectures) while ensuring robust data pipelines and MLOps practices.
Key Responsibilities
1. AI/LLM Development:
- Fine-tune and optimize LLMs (e.g., GPT, Llama) and traditional ML models for production.
- Implement retrieval-augmented generation (RAG), vector databases, and orchestration tools (e.g., LangChain).
2. Data Engineering:
- Build scalable data pipelines for unstructured/text data (e.g., Spark, Kafka, Airflow).
- Optimize storage/retrieval for embeddings (e.g., pgvector, Pinecone); a pgvector sketch follows this listing.
3. MLOps & Deployment:
- Containerize models (Docker) and deploy on cloud (AWS/Azure/GCP) using Kubernetes.
- Design CI/CD pipelines for LLM workflows (experiment tracking, monitoring).
4. Collaboration:
- Work with DevOps to optimize latency/cost trade-offs for LLM APIs.
- Mentor junior team members on ML engineering best practices.
Required Skills & Qualifications
- Education: MS/PhD in CS/AI/Data Science (or equivalent experience).
- Experience: 6+ years in ML and data engineering, with 2+ years in LLM/GenAI projects.
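For the embedding storage and retrieval point under Data Engineering, here is a minimal pgvector sketch in Python with psycopg2; the table name, vector dimension, connection string, and the embed() stub are assumptions for illustration and would be replaced by the team's actual schema and embedding model.

import psycopg2

def embed(text: str) -> list[float]:
    """Stub: call a real embedding model here; must return a 384-dimensional vector."""
    raise NotImplementedError

def to_pgvector(vec: list[float]) -> str:
    """Format a Python list as a pgvector literal, e.g. '[0.1,0.2,...]'."""
    return "[" + ",".join(f"{x:.6f}" for x in vec) + "]"

conn = psycopg2.connect("dbname=rag user=rag")  # placeholder DSN
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(384)   -- dimension must match the embedding model
    )
""")

content = "Refund policy: refunds are processed within 30 days."
cur.execute("INSERT INTO chunks (content, embedding) VALUES (%s, %s::vector)",
            (content, to_pgvector(embed(content))))

# Nearest-neighbour search by cosine distance (pgvector's <=> operator).
qvec = to_pgvector(embed("how long do refunds take?"))
cur.execute("SELECT content FROM chunks ORDER BY embedding <=> %s::vector LIMIT 5",
            (qvec,))
print([row[0] for row in cur.fetchall()])
conn.commit()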
Posted 2 months ago
4.0 - 9.0 years
6 - 11 Lacs
Chennai, Perungudi
Work from Office
Job Summary: We are seeking a versatile QA & AIOps Engineer with a minimum of 5 years of hands-on experience in both AIOps implementation and intelligent QA automation. The ideal candidate will bridge the gap between DevOps, QA, and AI engineering, ensuring robust infrastructure, AI agent deployment, intelligent testing, and automated monitoring for scalable AI systems. This role requires strong technical depth in cloud-native deployment (AKS/EKS), CI/CD pipelines, and AI/ML toolchains.
Key Responsibilities:
AIOps & DevOps Engineering:
- Deploy and manage AI agents in scalable production environments (Azure/AWS).
- Set up and maintain Kubernetes clusters (AKS/EKS), Docker containers, and ingress controllers (NGINX/Traefik).
- Integrate platforms like Azure AI Foundry or AWS Bedrock into production pipelines.
- Implement auto-scaling policies using tools like KEDA or the Horizontal Pod Autoscaler.
- Promote the use of vector databases (e.g., Pinecone, Weaviate) for AI workflows.
- Configure infrastructure security: secrets management, access controls, and network policies.
- Use Helm charts to manage complex deployments and upgrades of AI services.
- Troubleshoot production issues, identify bottlenecks, and ensure high availability of AI workloads.
- Collaborate with SREs, DevOps, and ML engineers to ensure seamless agent deployment, monitoring, and scaling.
QA & Intelligent Testing:
- Design, develop, and execute test cases (manual and automated) for web, API, and backend systems (a CI-oriented test sketch follows this listing).
- Implement AI-driven test strategies (e.g., test prioritization, risk-based testing).
- Create and maintain automated test scripts using tools like JUnit, PyTest, and JMeter.
- Integrate QA automation into CI/CD pipelines with anomaly detection and predictive analytics.
- Log and manage bugs; track issues to closure with development and DevOps teams.
- Use AI-powered QA tools such as Test.ai and Functionize for test optimization.
- Monitor AIOps dashboards to validate intelligent issue detection and auto-remediation flows.
Required Skills & Qualifications:
- 5+ years of combined experience in QA automation, DevOps, or AIOps roles.
- Hands-on expertise with Kubernetes (AKS/EKS), Docker, Helm, and CI/CD pipelines.
- Experience integrating AI/ML tools such as Azure AI Foundry and AWS Bedrock.
- Knowledge of AIOps platforms, observability tools, and monitoring dashboards.
- Proficient in Python, Bash, or JavaScript.
- Familiarity with vector DBs, AI observability, and security practices.
- Strong collaboration skills with cross-functional DevOps, QA, and AI engineering teams.
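As one illustration of automated API testing that plugs into a CI/CD pipeline, here is a small PyTest sketch; the base URL, endpoint paths, and latency budget are hypothetical and would be replaced by the team's actual staging environment and service contracts.

import pytest
import requests

BASE_URL = "https://staging.example.com"   # placeholder environment

@pytest.mark.parametrize("path", ["/healthz", "/api/v1/agents"])
def test_endpoint_is_healthy(path):
    """Smoke test intended to run in CI before a release is promoted."""
    resp = requests.get(f"{BASE_URL}{path}", timeout=5)
    assert resp.status_code == 200

def test_agent_inference_latency_budget():
    """Fails the pipeline if the AI endpoint exceeds an illustrative 2-second budget."""
    resp = requests.post(f"{BASE_URL}/api/v1/agents/answer",
                         json={"query": "ping"}, timeout=10)
    assert resp.status_code == 200
    assert resp.elapsed.total_seconds() < 2.0

Run with `pytest` inside the CI job; failing assertions stop the deployment stage.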
Posted 2 months ago
5.0 - 7.0 years
7 - 9 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Summary: We are seeking a passionate and skilled AI Engineer to design, develop, and deploy cutting-edge AI solutions across domains such as large language models (LLMs), computer vision, and autonomous agent workflows. You will collaborate with data scientists, researchers, and engineering teams to build intelligent systems that solve real-world problems using deep learning, transformer-based architectures, and multi-modal AI models.
Key Responsibilities:
- Design and implement AI/ML models, especially transformer-based LLMs (e.g., BERT, GPT, LLaMA) and vision models (e.g., ViT, YOLO, Detectron2).
- Develop and deploy computer vision pipelines for object detection, segmentation, OCR, and image classification tasks.
- Build and orchestrate intelligent agent workflows using prompt engineering, memory systems, retrieval-augmented generation (RAG), and multi-agent coordination.
- Fine-tune and optimize pre-trained models on domain-specific datasets using frameworks like PyTorch or TensorFlow.
- Collaborate with cross-functional teams to understand problem requirements and translate them into scalable AI solutions.
- Implement inference pipelines and APIs to serve AI models efficiently using tools such as FastAPI, ONNX, or Triton Inference Server (a minimal serving sketch follows this listing).
- Conduct model evaluation, benchmarking, A/B testing, and performance tuning.
- Stay updated with state-of-the-art research in deep learning, generative AI, and multi-modal learning.
- Ensure reproducibility, versioning, and documentation of all experiments and production models.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 3-5 years of hands-on experience in designing and deploying deep learning models.
- Strong knowledge of LLMs (e.g., GPT, BERT, T5), vision models (e.g., CNNs, Vision Transformers), and computer vision techniques.
- Experience building intelligent agents or using frameworks like LangChain, Haystack, AutoGPT, or similar.
- Proficiency in Python, with expertise in libraries such as PyTorch, TensorFlow, Hugging Face Transformers, OpenCV, and scikit-learn.
- Familiarity with MLOps concepts and deployment tools (Docker, Kubernetes, MLflow).
- Strong understanding of NLP, image processing, model fine-tuning, and optimization.
- Experience with cloud platforms (AWS, GCP, Azure) and GPU environments.
- Excellent problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience building multi-modal AI systems (e.g., combining vision and language models).
- Exposure to real-time inference systems and low-latency model deployment.
- Contributions to open-source AI projects or research publications.
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and RAG pipelines.
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
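For the inference-API responsibility, here is a minimal FastAPI serving sketch; the endpoint path, request/response schema, and dummy model loader are illustrative assumptions, and in practice the loader would wrap a fine-tuned PyTorch, ONNX, or Triton-hosted model.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def load_model():
    """Stand-in loader: replace with your fine-tuned model's inference callable."""
    def predict(text: str):
        return "positive", 0.99   # dummy output for illustration only
    return predict

model = load_model()

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    label, score = model(req.text)
    return PredictResponse(label=label, score=score)

# Run locally with: uvicorn app:app --host 0.0.0.0 --port 8000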
Posted 3 months ago
10.0 - 14.0 years
15 - 20 Lacs
Noida
Work from Office
Seniority: Senior
Description & Requirements
Position Summary
The Senior AI Engineer with GenAI expertise is responsible for developing advanced technical solutions that integrate cutting-edge generative AI technologies. This role requires a deep understanding of modern technical and cloud-native practices, AI, DevOps, and machine learning technologies, particularly generative models. You will support a wide range of customers through the ideation-to-MVP journey, showcasing leadership and decision-making abilities while tackling complex challenges.
Key Responsibilities
Technical & Engineering Leadership
- Develop solutions leveraging GenAI technologies, integrating advanced AI capabilities into cloud-native architectures to enhance system functionality and scalability.
- Lead the design and implementation of GenAI-driven applications, ensuring seamless integration with microservices and container-based environments.
- Create solutions that fully leverage the capabilities of modern microservice and container-based environments running in public, private, and hybrid clouds.
- Contribute to HCL thought leadership across the Cloud Native domain with an expert understanding of open-source technologies (e.g., Kubernetes/CNCF) and partner technologies.
- Collaborate on joint technical projects with partners, including Google, Microsoft, AWS, IBM, Red Hat, Intel, Cisco, and Dell/VMware.
Service Delivery
- Engineer innovative GenAI solutions from ideation to MVP, ensuring high performance and reliability within cloud-native frameworks.
- Optimize AI models for deployment in cloud environments, balancing efficiency and effectiveness to meet client requirements and industry standards.
- Assess existing complex solutions and recommend appropriate technical treatments to transform applications with cloud-native/12-factor characteristics.
- Refactor existing solutions to implement a microservices-based architecture.
Innovation & Initiative
- Drive the adoption of cutting-edge GenAI technologies within cloud-native projects, spearheading initiatives that push the boundaries of AI integration in cloud services.
- Engage in technical innovation and support HCL's position as an industry leader.
- Author whitepapers and blogs, and speak at industry events.
- Maintain hands-on technical credibility, stay ahead of industry trends, and mentor others.
Client Relationships
- Provide expert guidance to clients on incorporating GenAI and machine learning into their cloud-native systems, ensuring best practices and strategic alignment with business goals.
- Conduct workshops and briefings to educate clients on the benefits and applications of GenAI, establishing strong, trust-based relationships.
- Perform a trusted-advisor role, contributing to technical projects (PoCs and MVPs) with a strong focus on technical excellence and on-time delivery.
Mandatory Skills & Experience
- A passionate developer with 10+ years of experience in Java, Python, Node.js, and Spring programming, comfortable working as part of a paired/balanced team.
- Extensive experience in software development, with significant exposure to AI/ML technologies.
- Expertise in GenAI frameworks: proficient in using GenAI frameworks and libraries such as LangChain, OpenAI API, Gemini, and Hugging Face Transformers.
- Prompt engineering: experience in designing and optimizing prompts for various AI models to achieve desired outputs and improve model performance.
- Strong understanding of NLP techniques and tools, including tokenization, embeddings, transformers, and language models (a short tokenization/embedding sketch follows this listing).
- Proven experience developing complex solutions that leverage cloud-native technologies, featuring container-based, microservices-based approaches based on applying 12-factor principles to application engineering.
- Exemplary verbal and written communication skills (English).
- Positive and solution-oriented mindset.
- Solid experience delivering Agile and Scrum projects in a Jira-based project management environment.
- Proven leadership skills and the ability to inspire and manage teams.
Desired Skills & Experience
- Machine Learning Operations (MLOps): experience in deploying, monitoring, and maintaining AI models in production environments using MLOps practices.
- Data engineering for AI: skilled in data preprocessing, feature engineering, and creating pipelines to feed AI models with high-quality data.
- AI model fine-tuning: proficiency in fine-tuning pre-trained models on specific datasets to improve performance for specialized tasks.
- AI ethics and bias mitigation: knowledgeable about ethical considerations in AI and experienced in implementing strategies to mitigate bias in AI models.
- Knowledgeable about vector databases, LLMs, and SLMs, and integrating with such models.
- Proficient with Kubernetes and other cloud-native technologies, including experience with commercial Kubernetes distributions (e.g., Red Hat OpenShift, VMware Tanzu, Google Anthos, Azure AKS, Amazon EKS, Google GKE).
- Deep understanding of core practices including DevOps, SRE, Agile, Scrum, XP, and Domain-Driven Design, and familiarity with the CNCF open-source community.
- Recognized with multiple cloud and technical certifications at a professional level, ideally including AI/ML specializations from providers like Google, Microsoft, AWS, Linux Foundation, IBM, or Red Hat.
Verifiable Certification
- At least one recognized cloud professional/developer certification (AWS/Google/Microsoft)
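To illustrate the tokenization and embedding skills listed under Mandatory Skills, here is a short Hugging Face Transformers sketch; the checkpoint name, mean-pooling choice, and example sentences are assumptions made purely for illustration.

import torch
from transformers import AutoModel, AutoTokenizer

# Small, widely available checkpoint used only for demonstration.
name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["Refactor the service into microservices.",
             "Split the monolith into smaller services."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # (batch, tokens, dim)

# Mean-pool token vectors into one embedding per sentence, then compare them.
mask = batch["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(1) / mask.sum(1)
emb = torch.nn.functional.normalize(emb, dim=1)
print(float(emb[0] @ emb[1]))                          # cosine similarity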
Posted 3 months ago
3.0 - 8.0 years
4 - 8 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
We are looking for a skilled AI Engineer with 3 to 8 years of experience in software engineering or machine learning to design, implement, and productionize LLM-powered agents that solve real-world enterprise problems. This position is based in Kolkata.
Roles and Responsibilities
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers (a framework-free toy sketch follows this listing).
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone and Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
Job Requirements
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework and platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficient with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
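The custom-orchestration and tool-use ideas above can be illustrated with a framework-free toy sketch of a routing loop; the tool names, placeholder tool implementations, and the call_llm() stub are hypothetical and stand in for whichever LLM provider and agent framework the team actually uses.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def call_llm(prompt: str) -> str:
    """Stub for a hosted or open-source LLM call."""
    raise NotImplementedError

def sql_lookup(query: str) -> str:
    return "42 open tickets"           # placeholder for a real database query

def vector_search(query: str) -> str:
    return "Top FAQ chunk about SLAs"  # placeholder for a vector-store query

TOOLS = [Tool("sql", "Answer questions about ticket counts", sql_lookup),
         Tool("kb", "Retrieve knowledge-base passages", vector_search)]

def orchestrate(user_query: str) -> str:
    # Ask the LLM to pick a tool, run it, then ask for an answer grounded in the result.
    menu = "\n".join(f"- {t.name}: {t.description}" for t in TOOLS)
    choice = call_llm(f"Pick one tool by name for: {user_query}\n{menu}").strip()
    tool = next((t for t in TOOLS if t.name == choice), TOOLS[1])
    evidence = tool.run(user_query)
    return call_llm(f"Answer using only this evidence:\n{evidence}\n\nQuestion: {user_query}")

Production frameworks such as LangGraph or AutoGen add state, retries, and multi-agent coordination on top of this basic pattern.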
Posted 3 months ago
8.0 - 10.0 years
40 - 45 Lacs
bengaluru
Work from Office
Responsibilities:
ML Architecture, Strategy & Innovation
- Define and drive the ML strategy across business domains, identifying high-impact opportunities for automation, optimization, and prediction.
- Architect scalable ML systems and reusable frameworks that support real-time inference, batch processing, and continuous learning.
- Lead the evaluation and adoption of cutting-edge ML techniques (e.g., foundation models, causal inference, reinforcement learning) to solve complex business problems.
End-to-End ML Lifecycle Ownership
- Lead the design, development, and deployment of advanced supervised and unsupervised models for use cases such as churn prediction, demand forecasting, fraud detection, and dynamic pricing (a toy churn-model sketch follows this listing).
- Own the full ML lifecycle: from problem framing and data exploration to model training, validation, deployment, and monitoring.
- Champion best practices in experimentation, reproducibility, and responsible AI.
Cross-Functional Leadership & Business Impact
- Partner with senior stakeholders across Sales, Customer Service, Finance, Supply Chain, and Fulfillment to define and prioritize ML initiatives aligned with strategic goals.
- Translate ambiguous business challenges into well-scoped ML solutions with measurable ROI.
- Serve as a technical advisor to executive leadership on AI/ML trends, risks, and opportunities.
MLOps, Governance & Infrastructure
- Lead the design and implementation of robust MLOps pipelines using tools like DataRobot.
- Ensure scalable, secure, and compliant deployment of models in cloud-native environments (AWS, Azure, GCP).
- Establish governance frameworks for model versioning, monitoring, retraining, and auditability.
Data Engineering & Feature Platform Design
- Collaborate with data engineering teams to define and evolve enterprise-wide feature stores, data contracts, and real-time data pipelines.
- Drive innovation in feature engineering, leveraging domain knowledge and advanced statistical techniques.
Mentorship, Collaboration & Thought Leadership
- Mentor junior ML engineers and data scientists, fostering a culture of technical excellence and continuous learning.
- Contribute to internal knowledge sharing, technical design reviews, and ML community engagement.
- Publish whitepapers, present at conferences, or lead internal workshops on emerging ML technologies.
Qualifications:
Required:
- 8-10 years of experience in machine learning product management, AI engineering, or applied data science, with a strong foundation in software engineering.
- Proven experience deploying and scaling ML models in production environments using modern MLOps practices.
- Deep understanding of cloud ML platform capabilities (DataRobot is preferred).
- Strong communication skills with the ability to influence technical and non-technical stakeholders.
Preferred:
- Experience leading ML initiatives in enterprise domains such as Finance, Sales, or Supply Chain.
- Familiarity with advanced ML techniques (e.g., transformers, graph neural networks, time series forecasting, causal modeling).
- Exposure to enterprise platforms such as Salesforce and Oracle.
- Graduate degree (MS or PhD) in Computer Science, Machine Learning, Statistics, or a related field.
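As a toy illustration of one listed use case (churn prediction with a supervised model), here is a scikit-learn sketch trained on synthetic data; the features, label-generating rule, model choice, and metric are illustrative assumptions, not part of the posting.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(1, 60, n),          # tenure in months
    rng.exponential(40.0, n),        # monthly spend
    rng.integers(0, 5, n),           # support tickets last quarter
])

# Synthetic label: short tenure plus many tickets raises churn probability.
logits = -0.05 * X[:, 0] + 0.4 * X[:, 2] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

In production, the same training code would sit inside an MLOps pipeline with experiment tracking, retraining triggers, and monitoring, as described above.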
Posted Date not available
2.0 - 4.0 years
4 - 9 Lacs
chennai
Work from Office
Role & Responsibilities
Conversational AI & Chatbot Development
- Design and implement intelligent chatbot solutions using modern LLMs like GPT-4, Claude, LLaMA, etc.
- Build prompt strategies, chains, and RAG (Retrieval-Augmented Generation) pipelines for enhanced context-aware responses.
- Integrate chatbot systems with APIs, databases, and enterprise applications.
- Optimize model usage to balance cost, latency, and accuracy.
- Develop fallback logic, session tracking, and context management for multi-turn conversations (a small sketch follows this listing).
- Ensure secure, ethical, and responsible AI usage.
AI/ML for Predictive Analytics
- Analyze historical and real-time data to identify trends, anomalies, and patterns.
- Work with structured and unstructured data, including text, time series, and tabular data.
- Present actionable insights from models to stakeholders in clear and explainable ways.
Architecture & Deployment
- Design and maintain scalable AI/ML pipelines and architecture.
- Use MLOps tools to automate model training, testing, deployment, and monitoring (e.g., MLflow, DVC, Airflow, Kubeflow).
- Integrate models into production-grade applications using APIs and cloud ML services.
Qualifications
- 2-4 years of relevant experience in AI engineering.
- Strong experience with LLM-based chatbot frameworks: LangChain, LlamaIndex, LangGraph, etc.
- Expertise in prompt engineering, embeddings, and transformer-based models.
- Proficiency in Python (preferred) or other languages for ML and scripting.
- Familiarity with data pipelines, feature stores, and ETL processes.
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate).
- Strong understanding of machine learning algorithms, statistics, and data modeling.
Education: Bachelor's/Master's degree in Computer Science, Data Science, or AI/ML
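To illustrate session tracking, context management, and fallback logic for multi-turn conversations, here is a small framework-free Python sketch; the window size, fallback message, and generate() stub are hypothetical and would be replaced by the team's actual LLM call and session store.

from collections import defaultdict, deque

MAX_TURNS = 6                      # keep only recent turns to control token cost
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"
sessions = defaultdict(lambda: deque(maxlen=2 * MAX_TURNS))   # session_id -> messages

def generate(messages):
    """Stub for an LLM call; raise or return None to simulate a low-confidence answer."""
    raise NotImplementedError

def handle_turn(session_id: str, user_text: str) -> str:
    history = sessions[session_id]
    history.append({"role": "user", "content": user_text})
    try:
        reply = generate(list(history))
    except Exception:
        reply = None                              # provider error triggers fallback
    if not reply:
        reply = FALLBACK                          # fallback logic
    history.append({"role": "assistant", "content": reply})
    return reply

The bounded deque gives lightweight context management; a production bot would persist sessions externally (e.g., in Redis or a database) rather than in process memory.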
Posted Date not available
7.0 - 12.0 years
20 - 35 Lacs
hyderabad
Work from Office
About this role: Wells Fargo is seeking a Principal AI Engineer. In this role, you will:
- Act as an advisor to leadership to develop or influence applications, network, information security, database, operating systems, or web technologies for highly complex business and technical needs across multiple groups.
- Lead the strategy and resolution of highly complex and unique challenges requiring in-depth evaluation across multiple areas or the enterprise, delivering solutions that are long-term, large-scale, and require vision, creativity, innovation, and advanced analytical and inductive thinking.
- Translate advanced technology experience, an in-depth knowledge of the organization's tactical and strategic business objectives, the enterprise technological environment, the organization structure, and strategic technological opportunities and requirements into technical engineering solutions.
- Provide vision, direction, and expertise to leadership on implementing innovative and significant business solutions.
- Maintain knowledge of industry best practices and new technologies and recommend innovations that enhance operations or provide a competitive advantage to the organization.
- Strategically engage with all levels of professionals and managers across the enterprise and serve as an expert advisor to leadership.
Required Qualifications:
- 7+ years of engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
Desired Qualifications:
- This is an individual contributor role (Principal AI Engineer) at the level of Executive Director.
- Own the end-to-end AI/ML lifecycle: design, build, deploy, and maintain scalable agent solutions for Technology Analytics.
- Embed Generative AI and Agentic AI into day-to-day business processes to lift efficiency and employee productivity.
- Execute and refine the AI roadmap, keeping every project tightly aligned with enterprise strategy and measurable KPIs.
- Design autonomous AI agents capable of real-time decision-making and action with minimal human oversight.
- Enforce best-in-class practices for model scalability, reliability, security, and ongoing maintenance.
- Lead R&D on emerging AI tech, scouting fresh use cases to keep Wells Fargo at the innovation forefront.
- Drive cross-division rollout of AI solutions, track impact, and deliver tangible cost and time savings.
- Serve as an AI thought leader: mentor teams, champion best practices, and translate complex concepts for senior stakeholders.
- 7 years of analytics, AI, or engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
- 3+ years of experience in Generative AI tech such as Retrieval-Augmented Generation (RAG): design end-to-end pipelines (vector DBs, chunking, embedding, retrieval, context injection) to ground LLM responses on enterprise knowledge (a context-injection sketch follows this listing).
- Strong software development skills, particularly in Python, with experience working with AI frameworks (any one of LangChain, LangGraph, CrewAI, AutoGen, Google ADK) and tools in cloud environments; knowledge of ML, NLP, information retrieval, recommender systems, and LLMs.
- Must have deployed role-based agent (multi-agent orchestration) teams with one of the listed frameworks.
- Experience with container and orchestration technologies (Docker, Kubernetes, etc.) and cloud platforms (AWS, GCP, or Azure).
- Experience with training, fine-tuning, and applying large language models (LLMs) for agentic AI applications.
- Deployment & MLOps: containerize inference endpoints with Docker/K8s, scale on AWS/GCP GPU instances, and automate CI/CD, canary releases, and rollback strategies.
- 5+ years of experience with NLP tech and NLP pipelines (NLTK, spaCy, Hugging Face, BERT/GPT).
- 5+ years of hands-on experience with Microsoft Power Platform (Power Apps, Power Automate, Power BI, Dataverse), including solution architecture, governance, and deployment.
- Working knowledge of enterprise tools like MS Copilot Studio and AgentSpace for Agentic AI, with a strong understanding of prompt engineering, grounding, and extensibility.
- Familiarity with data connectors and API gateways that support seamless communication between systems.
- General analytics: data visualization using Tableau and Power BI; strong knowledge of SQL (structured databases).
- Deep subject matter expertise in AI technologies, including but not limited to Copilot Studio, Azure AI Foundry, Google Agent Space, Google Gemini, Microsoft 365, and M365 Copilot.
- Traditional data science: classical ML and statistical modeling, including linear/logistic regression, time-series ARIMA/Prophet, decision trees, random forests, gradient boosting (XGBoost/LightGBM), SVMs, k-means and hierarchical clustering, association-rule mining (Apriori), and A/B testing with hypothesis-testing frameworks.
- Ability to orchestrate processes across integrated systems to enable robust workflows and seamless operations.
- Ability to collaborate with cross-functional teams to align technical solutions with business goals and end-user needs.
- A strong focus on cybersecurity and risk management throughout the software development lifecycle.
- Passion for designing solutions that prioritize end-user experience and usability.
- Strong problem-solving skills and attention to detail in complex system design.
- Proficiency in designing and developing multi-agent systems where multiple AI agents collaborate to achieve complex tasks.
7 Aug 2025
To request a medical accommodation during the application or interview process, visit . Wells Fargo maintains a drug free workplace. Please see our to learn more.
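As a small illustration of the context-injection step in the RAG pipeline described above, this sketch assembles retrieved chunks into a grounded prompt; the template wording, citation convention, and retrieve() stub are hypothetical and would be replaced by the team's actual vector-DB lookup and prompt standards.

def retrieve(query: str, k: int = 4) -> list[str]:
    """Stub for a vector-DB lookup returning the top-k chunks for the query."""
    raise NotImplementedError

def build_grounded_prompt(query: str) -> list[dict]:
    chunks = retrieve(query)
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    system = ("Answer strictly from the numbered context. "
              "Cite chunk numbers like [2]. If the context is insufficient, say so.")
    user = f"Context:\n{context}\n\nQuestion: {query}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

The returned message list is then passed to the chat-completion call, which keeps the LLM's answer grounded in enterprise knowledge rather than its parametric memory.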
Posted Date not available
4.0 - 8.0 years
5 - 10 Lacs
kolkata
Remote
Architect and develop agentic AI systems using open-source LLMs and frameworks. Build lightweight edge agents and integrate them with central coordination agents.
Required Candidate Profile
- Collaborate with the IoT and ML teams to embed AI capabilities into operational environments.
- Optimise system performance across constrained and cloud environments.
Posted Date not available