1.0 - 6.0 years
9 - 16 Lacs
Gurgaon/Gurugram, Delhi / NCR
Work from Office
We are seeking a talented AI Engineer to join our team. The ideal candidate will have expertise in machine learning, NLP, deep learning, and deploying AI models, with experience in advanced tools and frameworks across AI and backend development.
Responsibilities: Develop and deploy AI models using machine learning, deep learning (CNNs, RNNs, LSTMs), and NLP (GPT, BERT). Work with advanced AI frameworks (LangChain, AutoGen, Streamlit) and backend frameworks (FastAPI, Flask). Implement reinforcement learning and autonomous decision-making systems. Leverage cloud platforms (AWS: Bedrock, SageMaker) for model deployment and scalability. Integrate third-party services (OpenAI, Azure Foundry) and tools (RelevanceAI, Hugging Face). Work with databases (MongoDB, Postgres, Redis) and distributed systems. Utilize advanced AI techniques such as RAG, fine-tuning, LoRA, QLoRA, multi-model systems, and stable diffusion. Explore AI applications in video generation models (text-to-video, image-to-video).
Skills Needed: Expertise in AI frameworks: LangChain, AutoGen, Streamlit. Strong experience with FastAPI, Flask, and cloud platforms (AWS, Bedrock, SageMaker). Knowledge of third-party services (OpenAI, Azure Foundry) and tools (RelevanceAI, Hugging Face). Solid experience with databases (MongoDB, Postgres, Redis). Proficiency in machine learning, deep learning, NLP, and reinforcement learning. Hands-on experience with advanced AI techniques: RAG, fine-tuning, LoRA, QLoRA, SLMs, LLMs, stable diffusion, and video generation models.
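The RAG and vector-search skills listed above usually start with a small retrieval step. A minimal sketch, assuming the sentence-transformers and faiss-cpu packages; the embedding model and documents are illustrative placeholders, not part of the posting:

```python
# Minimal RAG retrieval step: embed documents, index them, fetch top matches for a query.
# Assumes: pip install sentence-transformers faiss-cpu
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoice processing runs nightly on the finance cluster.",
    "Model deployments use AWS SageMaker endpoints.",
    "Redis caches embeddings for frequently asked queries.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # hypothetical embedding model choice
doc_vectors = model.encode(documents).astype("float32")  # shape: (num_docs, dim)

index = faiss.IndexFlatL2(doc_vectors.shape[1])          # exact L2 search over the vectors
index.add(doc_vectors)

query_vec = model.encode(["How are models deployed?"]).astype("float32")
_, ids = index.search(query_vec, 2)                      # top-2 nearest documents
context = [documents[i] for i in ids[0]]
print(context)  # passages that would be stuffed into the LLM prompt
```

In a full pipeline the retrieved passages would be concatenated into the prompt sent to the generator model.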
Posted 3 weeks ago
1.0 years
2 - 5 Lacs
IN
Remote
About the job:
Key responsibilities: 1. Build and refine LLM-powered agents using OpenAI, Groq, or HuggingFace 2. Collaborate via Cursor for fast-paced 'vibe coding' workflows 3. Design frontend UIs in React/Next.js with Tailwind or ShadCN 4. Develop backends with FastAPI (Python) or Node.js 5. Integrate and experiment with LangChain/LangGraph/vector DBs 6. Push MVPs rapidly and iterate efficiently
Requirements: 1. Demonstrate 1-2 years of experience in full-stack or AI-focused development 2. Show prior experience using Cursor.dev for collaborative coding 3. Apply a solid understanding of the modern web stack (React, Next.js, FastAPI, Node.js) 4. Utilize familiarity with LLM APIs (OpenAI, HuggingFace, Groq) 5. Bonus: experience with agent frameworks, chaining, and vector databases 6. Commit 3-5 hours daily during fixed, agreed-upon time slots 7. Seek to build portfolio projects or transition into an AI specialization
Tech Stack We Use: 1. Frontend: React.js, Next.js, Tailwind, ShadCN 2. Backend: FastAPI (Python), Node.js 3. AI/LLM: OpenAI, Groq, HuggingFace, LangChain, LangGraph 4. Tools: Cursor.dev, Git, Vercel, Render, Docker, Pinecone/Weaviate
What you'll get: 1. Own real projects and receive GitHub credit 2. Work closely with the founder and the small team 3. Maintain part-time flexibility alongside other commitments 4. Enjoy flexible hours with regular daily sync slots 5. Gain exposure to emerging AI agent technologies 6. Earn bonus pay/recognition for outstanding contributions
Who can apply: Only those candidates can apply who have a minimum of 1 year of experience and are Computer Science Engineering students.
Salary: ₹ 2,00,000 - 5,00,000 /year Experience: 1 year(s) Deadline: 2025-06-21 23:59:59
Skills required: Python, Node.js, React Native, FastAPI, LangChain and Cursor (GenAI)
About Company: TopSystems builds productivity-first software that helps solopreneurs and small teams punch far above their weight. Our forthcoming SaaS delivers an on-demand executive suite of AI Agents (strategy, ops, finance, marketing, and more), so founders can plan, decide, and execute faster than ever. We iterate quickly, keep pricing founder-friendly, and obsess over real business outcomes. If you love building with the latest AI tools and want to ship features that truly matter, you'll feel right at home.
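As a rough illustration of the FastAPI-plus-LLM-API stack this posting describes, here is a minimal sketch assuming the openai (v1+) and fastapi packages and an OPENAI_API_KEY in the environment; the route and model name are assumptions for illustration only:

```python
# Minimal FastAPI backend that forwards a user message to an LLM and returns the reply.
# Assumes: pip install fastapi uvicorn openai, and OPENAI_API_KEY set in the environment.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": req.message},
        ],
    )
    return {"reply": response.choices[0].message.content}

# Run locally with: uvicorn main:app --reload
```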
Posted 3 weeks ago
3 - 6 years
15 - 22 Lacs
Pune
Hybrid
About the role
PubMatic is seeking Data Analytics-focused Senior Software Engineers with expertise in building and optimizing AI agents, including strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, with proficiency in programming languages such as Scala and Python. You will be responsible for developing advanced AI agents that enhance our data analytics capabilities, enabling our platform to handle complex information retrieval, contextual understanding, and adaptive interactions, ultimately improving our data-driven insights and analytical workflows.
What You'll Do
Build, design, and implement our highly scalable, fault-tolerant big data platform to process terabytes of data and provide customers with in-depth analytics. Develop backend services using Java, REST APIs, JDBC, and AWS. Build and maintain big data pipelines using technologies like Spark, Hadoop, Kafka, and Snowflake. Architect and implement real-time data processing workflows and automation frameworks. Design and develop GenAI-powered agents for analytics, operations, and data enrichment use cases using frameworks like LangChain, LlamaIndex, or custom orchestration systems. Integrate LLMs (e.g., OpenAI, Claude, Mistral) into existing services for query understanding, summarization, and decision support. Manage end-to-end GenAI workflows including prompt engineering, fine-tuning, vector embeddings, and retrieval-augmented generation (RAG). Work closely with cross-functional teams on improving the availability and scalability of large data platforms and the functionality of PubMatic software. Participate in Agile/Scrum processes such as sprint planning, sprint retrospectives, backlog grooming, user story management, and work item prioritization. Frequently discuss with product managers which software features to include in the PubMatic Data Analytics platform. Support customer issues over email or JIRA (bug tracking system), and provide updates and patches to customers to fix issues. Perform code and design reviews for code implemented by peers, as per the code review process.
We'd Love for You to Have
Three to six years of coding experience in Java and backend development. Solid computer science fundamentals, including data structure and algorithm design, and creation of architectural specifications. Expertise in implementing professional software engineering best practices across the full software development life cycle, including coding standards and code reviews. Hands-on experience with big data tools and systems like Spark (Scala), Kafka, Hadoop, and Snowflake. Proven expertise in building GenAI applications, including: LLM integration (OpenAI, Anthropic, Cohere, etc.); LangChain or similar agent orchestration libraries; prompt engineering, embeddings, and retrieval-based generation (RAG). Experience in developing and deploying scalable, production-grade AI or data systems. Ability to lead end-to-end feature development and debug distributed systems. Experience in developing and delivering large-scale big data pipelines, real-time systems, and data warehouses would be preferred. Demonstrated ability to achieve stretch goals in a very innovative and fast-paced environment. Demonstrated ability to learn new technologies quickly and independently. Excellent verbal and written communication skills, especially in technical communications. Strong interpersonal skills and a desire to work collaboratively.
Qualifications: Should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university. Apply at the link below: https://pubmatic.com/job/?gh_jid=4692032008
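The big data pipeline work described above can be sketched with a small PySpark batch job (the team also uses Java/Scala; Python is used here only to keep the example compact). The input path and column names are invented for illustration:

```python
# Sketch of a small batch aggregation step in a Spark pipeline (PySpark).
# Assumes: pip install pyspark; paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ad-analytics-rollup").getOrCreate()

impressions = spark.read.parquet("s3://example-bucket/impressions/")  # hypothetical input path

daily_revenue = (
    impressions
    .groupBy("publisher_id", F.to_date("event_time").alias("day"))
    .agg(
        F.count("*").alias("impressions"),
        F.sum("revenue").alias("revenue"),
    )
)

daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily_revenue/")
spark.stop()
```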
Posted 4 weeks ago
1 - 4 years
6 - 10 Lacs
Bengaluru
Work from Office
What You'll Own
Full Stack Systems: Architect and build end-to-end applications using Flask, FastAPI, Node.js, React (or Next.js), and Tailwind. AI Integrations: Build and optimize pipelines involving LLMs (OpenAI, Groq, LLaMA), Whisper, TTS, embeddings, RAG, LangChain, LangGraph, and vector DBs like Pinecone/Milvus. Cloud Infrastructure: Deploy, monitor, and scale systems on AWS/GCP using EC2, S3, IAM, Lambda, Kafka, and ClickHouse. Real-time Systems: Design asynchronous workflows (Kafka, Celery, WebSockets) for voice-based agents, event tracking, or search indexing. System Orchestration: Set up scalable infra with autoscaling groups, Docker, and Kubernetes (PoC-ready, if not full prod). Growth-Ready Features: Implement in-app nudges, tracking with Amplitude, A/B testing, and funnel optimization.
Must-Haves: 1+ years of experience building production-grade full-stack systems. Fluency in Python and JS/TS (Node.js, React), shipping independently without handholding. Deep understanding of LLM pipelines, embeddings, vector search, and retrieval-augmented generation (RAG). Experience with AR frameworks (ARKit, ARCore), 3D rendering (Three.js), and real-time computer vision (MediaPipe). Strong grasp of modern AI model architectures: diffusion models, GANs, AI agents. Hands-on with system debugging, performance profiling, and infra cost optimization. Comfort with ambiguity: fast iteration, shipping prototypes, breaking things to learn faster. Bonus if you've built agentic apps, AI workflows, or virtual try-ons.
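For the asynchronous workflows mentioned above (Kafka, Celery, WebSockets), a minimal Celery sketch assuming a local Redis broker; the task name and payload are illustrative assumptions:

```python
# Minimal Celery setup for offloading slow work (e.g., search indexing) from request handlers.
# Assumes: pip install celery redis, and a Redis server running on localhost.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@app.task
def index_document(doc_id: str, text: str) -> str:
    # Placeholder for the real work (embedding + vector-DB upsert); kept trivial here.
    print(f"indexing {doc_id}: {len(text)} characters")
    return doc_id

# A web handler would enqueue work without blocking the request:
#   index_document.delay("doc-42", "long transcript ...")
# Start a worker (assuming this file is tasks.py) with:
#   celery -A tasks worker --loglevel=info
```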
Posted 1 month ago
9 - 14 years
40 - 45 Lacs
Bengaluru
Remote
Role & responsibilities: We are looking to fill a Data Scientist position with a US MNC for a permanent, remote role.
Preferred candidate profile
Mandatory Skills: 5+ years of relevant experience in data science. Experience with RAG implementation pipelines. Experience with AI/ML frameworks like TensorFlow, PyTorch, or similar. Proficiency in programming languages such as Python, SQL, and other relevant languages. Experience in the financial services industry and an understanding of compliance standards (preferred). Experience with master data management, data wrangling, and ETL processes. Experience with data and AI/ML technologies such as NLP/NLU, Azure Cognitive Services, Azure Synapse Analytics, Azure Databricks, and Azure ML. Certification in Data Science or AI/ML.
Posted 1 month ago
7 - 12 years
9 - 14 Lacs
Hyderabad, Gurugram
Work from Office
The Service Enablement team is responsible for providing products and solutions that enable better and higher-quality service delivery across the organization. Their primary focus is on facilitating the successful implementation and integration of these products, ensuring that they align with business needs and enhance the overall user experience. This involves offering comprehensive training and support to users, gathering feedback to drive continuous improvement, and optimizing processes to maximize efficiency.
The Impact: Are you looking for an opportunity to advance your career as an innovative enterprise leader? The Platform & Tools team is seeking an innovative professional who can bring leadership, creativity, and product management experience to a global team.
What's in it for you: As a Product Leader, you'll spearhead AI innovations and advance your career in enterprise leadership. You'll engage with cutting-edge technologies and gain valuable product management experience. Contribute to transformative initiatives that redefine the future of service delivery.
Responsibilities: In your daily role, you will drive the innovation, development, implementation, and adoption of product strategies that align with organizational goals. You will evaluate industry-leading technologies, conduct analyses to identify value-driven solutions, and monitor product performance. Your responsibilities will also include mentoring team members, facilitating training, and ensuring timely delivery of high-quality products. By promoting a culture of innovation, you will support the adoption of AI technologies and contribute to the success of the Service Enablement team.
Product Leadership & Strategy: Define and own the product vision, strategy, and roadmap for Service Enablement products, ensuring alignment with organizational goals and customer needs. Identify opportunities to leverage AI and intelligent workflows to streamline business operations and enhance user productivity. Guide cross-functional teams in delivering impactful and scalable products that meet market demands.
AI and Emerging Technology Integration: Evaluate and integrate cutting-edge AI capabilities, including large language models (LLMs), autonomous agents, machine learning workflows, and AI-driven decision-making frameworks. Collaborate with AI/ML research and engineering teams to develop innovative features that transform service delivery and support models. Stay updated on trends in AI and productivity platforms to apply relevant technologies.
Customer & Market Insight: Develop an understanding of user personas and pain points to drive effective product design. Conduct market research and competitive analysis to ensure product differentiation. Analyze product usage data and customer feedback to optimize features and outcomes.
Execution & Delivery: Support product strategy planning, prioritization, and execution throughout the product lifecycle. Collaborate with enterprise stakeholders to ensure alignment and success. Contribute to product OKRs and continuous improvement through agile practices.
Team Leadership: Mentor and develop a high-performing team of product managers. Foster a culture of innovation, accountability, and customer-centric thinking within the product organization.
Qualifications: Over 7 years of product management experience or similar roles, with a proven track record of success. Experience in leading enterprise platforms and initiatives. Strong understanding of customer and market dynamics within the service enablement discipline. Customer-focused mindset with a history of delivering impactful solutions. Knowledge of AI technologies, including generative AI and intelligent workflow systems. Experience working in agile environments with cross-functional teams. Excellent leadership, communication, and stakeholder management skills. Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field. Willingness to work flexible hours to meet business needs.
Preferred Qualifications: SAFe certification. Experience with service management platforms such as ServiceNow, Jira Service Management, Moveworks, Aisera, etc. Familiarity with AI frameworks and tools such as OpenAI, LangChain, or AutoGPT. Proficient in measuring impact with a data-driven approach.
Posted 1 month ago
7 - 10 years
9 - 12 Lacs
Hyderabad
Work from Office
Minimum of 5 years of experience in Python application development. Strong proficiency in the Python programming language and its ecosystem. Solid understanding of software development principles, data structures, and algorithms. Experience with web frameworks such as Angular or React. Proficiency in relational databases (e.g., PostgreSQL, MySQL) and ORM libraries. Experience with version control systems such as Git. Strong problem-solving skills and the ability to think algorithmically. Excellent communication and interpersonal skills. Ability to work effectively both independently and as part of a team in a fast-paced environment. Experience with cloud platforms (e.g., AWS) and containerization technologies (e.g., Docker, Kubernetes) is a plus. Familiarity with Agile development methodologies. Familiarity with ML/chatbot application development, LangChain, OpenAI, or other LLM models.
Posted 1 month ago
8 - 10 years
25 - 40 Lacs
Bengaluru
Remote
Job description
Technical Lead (Enterprise AI Systems)
We are seeking a Technical Lead to drive the design and development of our enterprise-grade AI product. This role requires a deep understanding of AI/ML systems, cloud-native architectures, and scalable software design. The applicant must have worked on an enterprise-grade AI product or project for at least 2-3 years. This role includes at least 50% hands-on development, 30% low-level design (LLD), and the remaining 20% high-level design (HLD)/architecture. The role is on the technical career path and is NOT suitable for project managers, technical managers, engineering managers, etc.
Key Responsibilities: Develop scalable, modular, and secure AI-driven software systems. Design APIs, microservices, and cloud infrastructure for AI workloads. Ensure seamless AI model integration and efficient scaling. Optimize performance and scalability of systems. Implement caching, load balancing, and distributed computing. Ensure security compliance (GDPR, SOC2) and secure data handling. Enforce best practices in authentication, encryption, and API security.
Mandatory Skills & Qualifications: Experience: 8-10 years in software architecture and design, with at least 3+ years focused on AI/ML product design and development; the last 1-2 years should include deep Gen AI experience in enterprise-grade AI projects or applications. Experience designing and developing applications with LLMs, RAG applications, and Gen AI agents. Fine-tuning open-source LLMs. Experience deploying and monitoring LLMs. Experience with LLM evaluation frameworks or tools like DeepEval, Langfuse, or similar. Experience with frameworks like LangChain, LlamaIndex, Pydantic, etc. Proficiency in RDBMS like PostgreSQL and knowledge of vector databases (e.g., ChromaDB, Pinecone, FAISS, Weaviate). Strong background in microservices, RESTful and GraphQL APIs, and event-driven architectures; working experience with Kafka, RabbitMQ, etc. Proficient in Python and Java for backend development. Experience with TensorFlow, PyTorch, Hugging Face, etc. Skilled in data processing (NumPy, Pandas, Dask, NLTK) and front-end development (TypeScript, React.js). Experience in scalable AI systems, data pipelines, MLOps, and real-time inference. Strong understanding of security protocols, IAM, OAuth2, JWT, RBAC. Expertise in cloud platforms (AWS, GCP, Azure), Kubernetes, Docker. Hands-on experience with CI/CD (e.g., GitHub, Jenkins). Hands-on with system monitoring (e.g., ELK Stack, OpenTelemetry, Prometheus, Grafana).
Nice-to-Have Skills: Ensuring AI model interpretability, fairness, and bias mitigation strategies. Optimizing model inference using quantization, distillation, and pruning techniques. Experience in deploying AI models in production at scale, including MLOps best practices. Designing AI systems with privacy-preserving techniques (differential privacy, homomorphic encryption, etc.). Experience with knowledge graph-based AI applications.
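Pydantic, listed among the frameworks above, is commonly used to validate structured output coming back from an LLM before it enters downstream services. A minimal sketch under that assumption; the schema and raw JSON are invented for illustration:

```python
# Validating an LLM's JSON answer against a strict schema with Pydantic v2.
# Assumes: pip install "pydantic>=2"
from pydantic import BaseModel, Field, ValidationError

class RiskAssessment(BaseModel):
    severity: str = Field(pattern="^(low|medium|high)$")
    summary: str
    follow_up_required: bool

raw = '{"severity": "high", "summary": "PII found in logs", "follow_up_required": true}'

try:
    result = RiskAssessment.model_validate_json(raw)
    print(result.severity, result.follow_up_required)
except ValidationError as err:
    # In a production agent this would trigger a retry or a repair prompt.
    print("LLM output did not match the schema:", err)
```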
Posted 1 month ago
3 - 5 years
15 - 25 Lacs
Hyderabad
Remote
Job Title: AI Developer - Supply Chain Management. Location: Permanent Remote. Job Type: Full-time. Team: Product AI Engineering.
We are looking for a talented LLM Developer to join our AI Engineering team and contribute to the development of the SCM Expert AI Agent, the core brain behind AI capabilities like demand forecasting, anomaly detection, ETA prediction, and route optimization in the FarmToPlate (F2P) platform. In this role, you will focus on integrating and optimizing LLM-based agentic architectures using frameworks like LangChain, CrewAI, and RAG pipelines. You'll work closely with traditional ML developers and backend engineers to deploy intelligent, explainable, and real-time decision agents for supply chain management.
Responsibilities: Design, build, and optimize LLM-based agents using LangChain, CrewAI, or custom wrappers. Implement RAG pipelines with vector stores (e.g., FAISS, ChromaDB) and embedding models. Develop LLM-powered tools for demand forecasting, query understanding, anomaly explanations, and summarization. Integrate agents with internal and external data sources using tools like MongoDB, scikit-learn datasets, or browsing agents. Implement prompt engineering, agent memory, and context chaining for more dynamic agent behavior. Collaborate with data science and product teams to translate SCM logic into agentic workflows. Expose models and agents through APIs using FastAPI, including role-based access if needed. Optimize runtime performance with quantization, PEFT (parameter-efficient fine-tuning), and model caching. Track experiments, versions, and deployment artifacts using tools like MLflow, SageMaker, or custom registries.
Requirements: 3-5 years of experience working on Python-based AI systems. Strong understanding of LLMs, transformers, prompt engineering, and conversational AI. Hands-on experience with LangChain, CrewAI, OpenAI/GPT APIs, or Hugging Face Transformers. Familiarity with RAG concepts, vector databases (FAISS, Chroma), and embedding techniques. Good knowledge of API frameworks (FastAPI/Flask) and working with JSON schema-driven inputs. Basic understanding of traditional ML models and workflows (e.g., regression, classification, anomaly detection). Comfortable integrating external data tools and APIs into LLM pipelines.
Bonus Points For: Experience building agentic RAG systems that combine logic, tools, memory, and models. Knowledge of open-source LLM tuning using PEFT, LoRA, bitsandbytes, Unsloth, etc. Familiarity with SageMaker, EKS, or deploying custom models to the cloud. Experience in supply chain/logistics, or working with ERP/SCM structured data. Built monitoring dashboards using Evidently.ai or similar model observability tools.
Why Join Us: Be part of a domain-driven AI platform blending supply chain expertise with modern agentic LLM architectures. Work on real-world use cases that impact farm-to-plate food systems at scale. Collaborate with a fast-moving, cross-functional team that values innovation, ownership, and outcomes. Learn and grow in a modular, full-stack AI environment where your ideas become deployed APIs. Enjoy flexibility in remote work, experimentation, and continuous learning.
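The RAG pipeline work described above (vector stores such as FAISS or ChromaDB plus embedding models) can be reduced to a small retrieval step. An illustrative sketch assuming the chromadb package with its default embedding function; the collection name and documents are placeholders:

```python
# Minimal retrieval step with ChromaDB: add supply-chain snippets, then query them.
# Assumes: pip install chromadb (uses its default embedding function).
import chromadb

client = chromadb.Client()  # in-memory instance; PersistentClient(path=...) keeps data on disk
collection = client.create_collection(name="scm_notes")

collection.add(
    ids=["n1", "n2", "n3"],
    documents=[
        "Shipment ETA slipped by 2 days due to port congestion in week 14.",
        "Demand for SKU-118 spikes every March in the northern region.",
        "Route R-7 was rerouted after repeated cold-chain temperature alerts.",
    ],
)

results = collection.query(query_texts=["Why was the ETA delayed?"], n_results=2)
print(results["documents"][0])  # passages an LLM agent would cite in its anomaly explanation
```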
Posted 1 month ago
2 - 5 years
8 - 12 Lacs
Pune
Work from Office
About the job: The Red Hat Experience Engineering (XE) team is looking for a skilled Python Developer with 2+ years of experience to join our Software Engineering team. In this role, the ideal candidate should have a strong background in Python development, a deep understanding of LLMs, and the ability to debug and optimize AI applications. Your work will directly impact our product development, helping us drive innovation and improve the customer experience.
What will you do? Develop and maintain Python-based applications, integrating LLMs and AI-powered solutions. Collaborate with cross-functional teams (product managers, software engineers, and data teams) to understand requirements and translate them into data-driven solutions. Assist in the development, testing, and optimization of AI-driven features. Optimize performance and scalability of applications utilizing LLMs. Debug and resolve Python application errors, ensuring stability and efficiency. Conduct exploratory data analysis and data cleaning to prepare raw data for modelling. Optimize and maintain data storage and retrieval systems for model input/output. Research and experiment with new LLM advancements and AI tools to improve existing applications. Document workflows, model architectures, and code to ensure reproducibility and knowledge sharing across the team.
What will you bring? Bachelor's degree in Computer Science, Software Engineering, or a related field with 2+ years of relevant experience. Strong proficiency in Python, including experience with frameworks like FastAPI, Flask, or Django. Understanding of fundamental AI/ML concepts, algorithms, techniques, and implementation of workflows. Familiarity with DevOps/MLOps practices and tools for managing the AI/ML lifecycle in production environments. Understanding of LLM training processes and data requirements. Experience in LLM fine-tuning, RAG, and prompt engineering. Hands-on experience with LLMs (e.g., OpenAI GPT, Llama, or other transformer models) and their integration into applications (e.g., LangChain or Llama Stack). Familiarity with REST APIs, data structures, and algorithms. Strong problem-solving skills with the ability to analyze and debug complex issues. Experience with Git, CI/CD pipelines, and Agile methodologies. Experience working with cloud-based environments (AWS, GCP, or Azure) is a plus. Knowledge of vector databases (e.g., Pinecone, FAISS, ChromaDB) is a plus.
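For the LangChain-based integration and prompt engineering mentioned above, a minimal sketch assuming the langchain-core and langchain-openai packages and an OPENAI_API_KEY; LangChain's API evolves quickly, so treat this as indicative rather than definitive, and the prompt wording and model are assumptions:

```python
# Minimal LangChain chain: a prompt template piped into a chat model (LCEL style).
# Assumes: pip install langchain-core langchain-openai, OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer product documentation questions in two sentences."),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # hypothetical model choice

chain = prompt | llm                       # prompt output feeds the model
answer = chain.invoke({"question": "What is a container image?"})
print(answer.content)
```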
Posted 1 month ago
4 - 9 years
7 - 17 Lacs
Mumbai
Work from Office
Job Description & Responsibilities: Identify valuable data sources and automate collection processes. Undertake preprocessing of structured and unstructured data. Analyze large amounts of information to discover trends and patterns. Build predictive models and machine-learning algorithms. Combine models through ensemble modeling. Present information using data visualization techniques. Propose solutions and strategies to business challenges. Collaborate with engineering and product development teams.
Qualification, Skills & Experience: Good hands-on experience in advanced Python programming. Experience in scalable microservice architecture and development. Good understanding of API development, messaging queues, and asynchronous task management. Basic DevOps skills and experience working with Docker. Experience with LangChain, LangGraph, AutoGen, or CrewAI is a big plus. Ability to work in a small team spread across different locations. Ability to learn and understand a new Python framework.
Posted 1 month ago
4 - 9 years
7 - 17 Lacs
Mumbai
Work from Office
We are seeking a proficient AI/ML Engineer responsible for testing, deploying, and hosting various Large Language Models (LLMs). The role involves monitoring deployed agentic services, ensuring their optimal performance, and staying abreast of the latest advancements in AI technologies.
Key Responsibilities:
LLM Deployment & Hosting: Evaluate, test, and deploy diverse LLMs on cloud-based and on-premise infrastructure. Optimize model performance for scalability and efficiency. Implement secure API endpoints and model-serving pipelines. Strong CI/CD skills and knowledge of CI/CD tools like Jenkins, GitHub Actions, CircleCI, etc.
Agentic AI Services: Deploy and maintain AI agents using frameworks such as CrewAI, Agnos (Phi Data), AutoGen, etc. Integrate LLMs into business workflows and automation tools. Design, monitor, and enhance agentic services for real-world applications.
Monitoring & Optimization: Utilize advanced observability tools to monitor model performance, latency, and cost efficiency. Implement tools like AgentOps, OpenLIT, Langfuse, and Langtrace for comprehensive monitoring and debugging of LLM applications. Develop logging, tracing, and alerting systems for deployed models and AI agents. Conduct A/B testing and gather user feedback to refine AI behavior. Address model drift and retrain models as necessary.
Best Practices & Research: Stay updated with the latest advancements in AI, LLMs, and agentic systems. Implement best practices for prompt engineering, reinforcement learning from human feedback (RLHF), and fine-tuning methodologies. Optimize compute costs and infrastructure usage for AI applications. Collaborate with researchers and ML engineers to integrate state-of-the-art AI techniques. Strong knowledge of version control tools like Git/GitHub.
Qualifications & Skills: Proficiency with LLM frameworks such as Hugging Face Transformers, the OpenAI API, or Meta AI models. Strong programming skills in Python and experience with deep learning libraries (PyTorch, TensorFlow, JAX). Experience with cloud platforms (AWS, Azure, GCP) and model deployment tools (Docker, Kubernetes, FastAPI, Ray Serve). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG) techniques. Hands-on experience with monitoring tools such as AgentOps, OpenLIT, Langfuse, and Langtrace. Understanding of prompt engineering, LLM fine-tuning, and agent-based automation. Excellent problem-solving skills and the ability to work in a dynamic AI research and deployment team.
Preferred Qualifications: Experience in reinforcement learning, fine-tuning LLMs, or training custom models. Knowledge of security best practices for AI applications. Contributions to open-source AI/ML projects or research publications in the field.
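Monitoring deployed model endpoints, as described above, often starts with simple request-level latency logging before heavier observability tools are wired in. A minimal sketch assuming FastAPI serves the model; the endpoint and the echoed "model call" are placeholders:

```python
# FastAPI middleware that records per-request latency for a model-serving endpoint.
# Assumes: pip install fastapi uvicorn; the /generate body is a stand-in for a real model call.
import logging
import time

from fastapi import FastAPI, Request

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm-serving")

app = FastAPI()

@app.middleware("http")
async def log_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.info("%s %s -> %d in %.1f ms", request.method, request.url.path,
                response.status_code, elapsed_ms)
    response.headers["X-Process-Time-Ms"] = f"{elapsed_ms:.1f}"
    return response

@app.post("/generate")
async def generate(payload: dict) -> dict:
    # Placeholder for an actual LLM call; keeps the example self-contained.
    return {"completion": f"echo: {payload.get('prompt', '')}"}
```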
Posted 1 month ago
5 - 10 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Chennai
Work from Office
We are looking for an AI Engineer (Senior Software Engineer). Interested candidates can email resumes to mayura.joshi@lionbridge.com or WhatsApp 9987538863.
Responsibilities: Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks. Implement and fine-tune models to improve response relevance and contextual accuracy. Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding. Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities. Integrate AI models with enterprise applications and APIs. Optimize model inference for performance and scalability. Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives. Ensure ethical AI implementation, addressing bias, explainability, and data security. Stay updated with the latest advancements in generative AI, deep learning, and RAG techniques.
Requirements: 8+ years of experience in software development according to development standards. Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain. Proficiency in retrieval-augmented generation (RAG) techniques and vector search methodologies. Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate. Solid understanding of NLP, deep learning, and transformer architectures. Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.). Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows. Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments. Strong problem-solving and debugging skills. Excellent communication and teamwork abilities. Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
Posted 1 month ago
2 - 7 years
4 - 9 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Hybrid
Roles and Responsibilities: Design, develop, test, and maintain backend systems using Python and other technologies. Develop algorithms for data processing and analysis using open APIs and LangChain. Ensure system architecture meets performance, scalability, and reliability standards. Collaborate with cross-functional teams to identify requirements and implement solutions. Troubleshoot issues related to system design and implementation.
Desired Candidate Profile: 2-7 years of experience in software development with a focus on backend engineering. B.Tech/B.E. degree in any specialization (preferred). Strong understanding of algorithm development, problem-solving skills, teamwork abilities, and proficiency with open APIs.
Posted 1 month ago
2 - 5 years
15 - 18 Lacs
Pune
Work from Office
Summary of the Role: We are seeking an AI Prompt Engineer with expertise in Large Language Models (LLMs) and prompt engineering to support legal tech solutions, specifically in contract lifecycle management and contract-related AI implementations. The ideal candidate will have a strong background in AI, NLP, and structured prompt engineering, with a keen understanding of legal tech applications. Prior experience in the legal domain is a plus but not mandatory.
What you will do: Design, develop, and refine AI prompts for legal tech applications, ensuring accuracy and contextual relevance in contract-related workflows. Work with Large Language Models (LLMs) to enhance AI-driven solutions for contract analysis, review, and automation. Optimize prompt structures and techniques to improve AI-generated outputs in contract drafting, compliance, and negotiations. Research, test, and implement best practices in prompt engineering to maximize efficiency and accuracy in contract delivery. Evaluate and refine AI-generated legal documents to align with compliance standards and client requirements. Stay updated with advancements in LLMs, prompt engineering methodologies, and AI-driven legal tech solutions.
What you bring: Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field. Minimum 5 years of overall professional experience, with at least 2 years in AI prompt engineering. Strong understanding of LLMs (GPT, Claude, Gemini, Llama, etc.) and their application in legal or enterprise use cases. Proven expertise in prompt design, optimization, and fine-tuning for AI-driven text generation. Hands-on experience with AI tools, frameworks, and APIs (OpenAI, Hugging Face, LangChain, etc.). Strong problem-solving skills and the ability to work in a fast-paced environment. Excellent communication and collaboration skills to work with cross-functional teams.
Bonus Points: Basic understanding of contracts, legal terminology, and compliance standards.
Applications must be submitted exclusively through Execo's official job postings located on the following platforms: Execo Careers Website: https://www.execo.com/careers LinkedIn: https://www.linkedin.com/company/execogroup/jobs/ Indeed: US & Kenya: https://www.indeed.com/cmp/Execo-Group-Inc India: https://in.indeed.com/cmp/Execo-Group-Inc UK: https://uk.indeed.com/cmp/Execo-Group-Inc Philippines: https://ph.indeed.com/cmp/Execo-Group-Inc Singapore: https://sg.indeed.com/cmp/Execo-Group-Inc Naukri: https://www.naukri.com/
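Structured prompt design of the kind described above can be illustrated with a small template. A hedged sketch using the OpenAI Python client; the clause categories, model choice, and sample clause are invented for illustration and are not the company's actual schema:

```python
# A structured extraction prompt for contract review, sent through the OpenAI chat API.
# Assumes: pip install openai, OPENAI_API_KEY in the environment; schema is illustrative.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = """You are a contract-review assistant.
Extract the following fields from the clause below and answer in JSON only:
- "clause_type": one of ["termination", "liability", "confidentiality", "other"]
- "risk_level": one of ["low", "medium", "high"]
- "rationale": one short sentence

Clause:
\"\"\"{clause}\"\"\"
"""

clause = "Either party may terminate this agreement with 30 days written notice."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(clause=clause)}],
    temperature=0,
)
print(response.choices[0].message.content)
```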
Posted 1 month ago
5 - 10 years
18 - 33 Lacs
Gurugram
Remote
Design and build production-grade Python APIs, integrate LLMs using LangChain/LangGraph, and develop backend systems. Requires 5+ yrs in backend dev, strong Python, hands-on LLM integration, and experience with async workflows and deployment.
Posted 1 month ago
- 1 years
0 Lacs
Hyderabad
Hybrid
Position Overview: We are seeking highly motivated and innovative AI Agent Development Interns. You will work with our AI and Oracle Cloud teams to build and prototype AI agents using Large Language Models (LLMs) integrated with Oracle Fusion Applications. This internship is ideal for students who want to work on cutting-edge AI use cases with direct business impact in enterprise environments.
Key Responsibilities: Design and build AI agents using LLMs for Oracle Fusion modules: HCM, ERP, SCM, and Risk Management. Integrate AI agent flows with Oracle Cloud APIs and datasets. Collaborate with Oracle Fusion ERP, SCM, HCM & GRC specialists, data engineers, and cloud developers to refine models and workflows. Actively participate in the entire project lifecycle, from conceptualization, requirements gathering, and design through coding, rigorous testing, iterative refinement, and documentation. Document your approach, challenges, and learning outcomes.
Qualifications: Currently pursuing a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or related fields. Strong proficiency in Python, APIs, and LLM frameworks (e.g., OpenAI, LangChain, HuggingFace). Familiarity with Oracle Fusion Cloud or interest in enterprise applications is a strong plus. Understanding of prompt engineering, embeddings, and agent orchestration. Strong problem-solving skills, curiosity, and eagerness to learn new technologies.
Preferred Skills: Experience with tools like LangChain, vector DBs, or Retrieval-Augmented Generation (RAG). Familiarity with REST APIs, JSON, and webhooks. Exposure to enterprise IT or cloud platforms (OCI, AWS, Azure, etc.).
What You Will Gain: Real-world experience building AI applications for enterprise use. Exposure to Oracle's enterprise ecosystem and industry-leading risk solutions. Mentorship from cloud, AI, ERP, SCM, HCM, and GRC experts. Opportunity to present your work in internal demos and client-facing forums. Certificate of completion and a potential full-time offer.
Posted 1 month ago
12 - 22 years
50 - 55 Lacs
Hyderabad, Gurugram
Work from Office
Job Summary: Director, Collection Platforms and AI
As a director, you will be essential in driving customer satisfaction by delivering tangible business results to the customers. You will be working for the Enterprise Data Organization and will be an advocate and problem solver for the customers in your portfolio as part of the Collection Platforms and AI team. You will be using communication and problem-solving skills to support customers on their automation journey with emerging automation tools, to build and deliver end-to-end automation solutions for them.
Team: Collection Platforms and AI. The Enterprise Data Organization's objective is to drive growth across S&P divisions, enhance speed and productivity in our operations, and prepare our data estate for the future, benefiting our customers. Therefore, automation represents a massive opportunity to improve quality and efficiency, to expand into new markets and products, and to create customer and shareholder value. Agentic automation is the next frontier in intelligent process evolution, combining AI agents, orchestration layers, and cloud-native infrastructure to enable autonomous decision-making and task execution. To leverage the advancements in automation tools, it's imperative to not only invest in the technologies but also democratize them, build literacy, and empower the workforce. The Collection Platforms and AI team's mission is to drive this automation strategy across S&P Global and help create a truly digital workplace. We are responsible for creating, planning, and delivering transformational projects for the company using state-of-the-art technologies and data science methods, developed either in house or in partnership with vendors. We are transforming the way we collect the essential intelligence our customers need to make decisions with conviction, delivering it faster and at scale while maintaining the highest quality standards.
What we're looking for: You will lead the design, development, and scaling of AI-driven agentic pipelines to transform workflows across S&P Global. This role requires a strategic leader who can architect end-to-end automation solutions using agentic frameworks, cloud infrastructure, and orchestration tools while managing senior stakeholders and driving adoption at scale. A visionary technical leader with knowledge of designing agentic pipelines and deploying AI applications in production environments. Understanding of cloud infrastructure (AWS/Azure/GCP), orchestration tools (e.g., Airflow, Kubeflow), and agentic frameworks (e.g., LangChain, AutoGen). Proven ability to translate business workflows into automation solutions, with emphasis on financial/data services use cases. An independent, proactive person who is innovative, adaptable, creative, and detail-oriented with high energy and a positive attitude. Exceptional skills in listening to clients and articulating ideas and complex information in a clear and concise manner. Proven record of creating and maintaining strong relationships with senior members of client organizations, addressing their needs, and maintaining a high level of client satisfaction. Ability to understand what the right solution is for all types of problems, understanding and identifying the ultimate value of each project. Operationalize this technology across S&P Global, delivering scalable solutions that enhance efficiency, reduce latency, and unlock new capabilities for internal and external clients. Exceptional communication skills with experience presenting to C-level executives.
Responsibilities: Engage with multiple client areas (external and internal), truly understand their problems, and then deliver and support solutions that fit their needs. Understand the existing S&P Global product set and leverage existing products as necessary to deliver a seamless end-to-end solution to the client. Evangelize agentic capabilities through workshops, demos, and executive briefings. Educate and spread awareness within the external client base about automation capabilities to increase usage and idea generation. Increase automation adoption by focusing on distinct users and distinct processes. Deliver exceptional communication to multiple layers of management for the client. Provide automation training, coaching, and assistance specific to a user's role. Demonstrate strong working knowledge of automation features to meet evolving client needs. Maintain extensive knowledge and literacy of the suite of products and services offered, ongoing enhancements, and new offerings, and how they fulfill customer needs. Establish monitoring frameworks for agent performance, drift detection, and self-healing mechanisms. Develop governance models for ethical AI agent deployment and compliance.
Preferred Qualifications: 12+ years of work experience with 5+ years in the Automation/AI space. Knowledge of: cloud platforms (AWS SageMaker, Azure ML, etc.), orchestration tools (Prefect, Airflow, etc.), and agentic toolkits (LangChain, LlamaIndex, AutoGen). Experience in productionizing AI applications. Strong programming skills in Python and common AI frameworks. Experience with multi-modal LLMs and integrating vision and text for autonomous agents. Excellent written and oral communication in English. Excellent presentation skills with a high degree of comfort speaking with senior executives, IT management, and developers. Hands-on ability to build quick prototypes/visuals to assist with high-level product concepts and capabilities. Experience in deployment and management of applications utilizing cloud-based infrastructure. A desire to work in a fast-paced and challenging work environment. Ability to work in cross-functional, multi-geographic teams.
Posted 1 month ago
3 - 8 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Required Skills & Qualifications
Education & Experience: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 3-5 years of hands-on experience developing AI or data-driven applications in Python.
Technical Expertise: LangChain: proficiency in designing and implementing RAG pipelines. AWS Bedrock / Azure OpenAI: familiarity with integrating large language models. Pydantic: solid understanding of data validation and configuration management. LangGraph: experience with workflow orchestration and state management. FastAPI & React: experience building RESTful APIs and integrating front-end frameworks. AWS: practical knowledge of cloud deployment, CI/CD, and infrastructure management. Pinecone: familiarity with hybrid retrieval techniques (sparse + dense), similarity search configurations, and metadata filtering.
Soft Skills: Excellent problem-solving abilities and attention to detail. Strong communication skills, with the ability to explain complex AI concepts to non-technical stakeholders. A team player who thrives in a collaborative environment.
Experience with Docker or Kubernetes for containerization and orchestration. Knowledge of MLOps tools and practices (e.g., MLflow, Airflow, etc.). Familiarity with other vector databases or vector search engines.
Posted 1 month ago
3 - 6 years
8 - 18 Lacs
Chennai
Hybrid
Job Description: We are looking for a talented and experienced Senior Data Scientist with a minimum of 4 years of professional experience in model development and expertise in Gen AI projects. As a Senior Data Scientist, you will be responsible for developing advanced machine learning models, conducting exploratory data analysis (EDA), performing feature selection/reduction, and utilizing cutting-edge technologies to deliver high-quality solutions. The ideal candidate should possess strong programming proficiency in Python and experience with cloud platforms like GCP or equivalent, as well as visualization tools like Qlik Sense, Power BI, Looker Studio, or Tableau.
Job Responsibilities (Mandatory Skills, 3+ years): Model development (regression and classification): strong experience performing EDA and feature selection/reduction, building models, and evaluating their performance. Should have worked on a Gen AI project. Strong programming proficiency in Python with mastery of data science libraries: Pandas, NumPy, scikit-learn, and XGBoost or PyTorch. GCP (BigQuery, Vertex AI) or an equivalent cloud. Visualization (Qlik Sense / Looker Studio / Power BI / Tableau).
Skills: GenAI & RAG, model development (regression and classification), machine learning, Python, GCP or an equivalent cloud, visualization, LLM model building.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Interview Date: 17th May 2025 (Saturday), face-to-face (in-person) interview in Chennai.
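The regression/classification model development and evaluation listed above follow a standard scikit-learn pattern. A minimal sketch on a bundled toy dataset; in practice the features would come from the EDA and feature-selection steps the posting describes:

```python
# Baseline classification workflow: split, fit, and evaluate a model with scikit-learn.
# Assumes: pip install scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Precision/recall/F1 per class on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```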
Posted 1 month ago
3 - 5 years
25 - 40 Lacs
Noida
Work from Office
Hi, we are hiring an ML Developer. Please find the JD below.
Key Responsibilities: Design, develop, and optimize machine learning models for various business applications. Build and maintain scalable AI feature pipelines for efficient data processing and model training. Develop robust data ingestion, transformation, and storage solutions for big data. Implement and optimize ML workflows, ensuring scalability and efficiency. Monitor and maintain deployed models, ensuring performance and reliability, and retrain when necessary.
Preferred candidate profile / Qualifications and Experience: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3.5 to 5 years of experience in machine learning, deep learning, or data science roles. Proficiency in Python and ML frameworks/tools such as PyTorch and LangChain. Experience with data processing frameworks like Spark, Dask, Airflow, and Dagster. Hands-on experience with cloud platforms (AWS, GCP, Azure) and ML services. Experience with MLOps tools like MLflow and Kubeflow. Familiarity with containerisation and orchestration tools like Docker and Kubernetes. Excellent problem-solving skills and ability to work in a fast-paced environment. Strong communication and collaboration skills.
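MLOps tools such as MLflow (listed above) are typically used to record each training run's parameters, metrics, and artifacts. A minimal sketch assuming a local MLflow tracking store; the model and parameter values are placeholders:

```python
# Logging a training run's parameters, metric, and model artifact with MLflow.
# Assumes: pip install mlflow scikit-learn; uses the default local ./mlruns store.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")

# Inspect the recorded runs afterwards with: mlflow ui
```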
Posted 1 month ago
4 - 9 years
30 - 36 Lacs
Pune
Work from Office
Responsibilities: * Design generative AI solutions using LLMs, Python, SQL, and NLP techniques. * Collaborate with cross-functional teams on project delivery. * Implement RNN, LSTM, and GRU models for text analysis.
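The RNN/LSTM/GRU text-analysis work mentioned above usually boils down to a small recurrent classifier. A minimal PyTorch sketch with made-up vocabulary size and dimensions, shown only to illustrate the layer wiring:

```python
# Skeleton of an LSTM-based text classifier in PyTorch (illustrative dimensions).
# Assumes: pip install torch
import torch
import torch.nn as nn

class LSTMTextClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):             # token_ids: (batch, seq_len) of int64
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])             # logits: (batch, num_classes)

model = LSTMTextClassifier()
dummy_batch = torch.randint(0, 5000, (4, 20))  # 4 sequences of 20 token ids
print(model(dummy_batch).shape)                # torch.Size([4, 2])
```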
Posted 1 month ago
4 - 8 years
8 - 18 Lacs
Hyderabad
Work from Office
Job description
Job Summary: We are looking for a results-driven and innovative Data Scientist with 4-6 years of experience in data analysis, machine learning, and product optimization. The ideal candidate will have a strong foundation in Python, SQL, and cloud services, along with practical exposure to GenAI, LLMs, and MLOps frameworks. You will be responsible for building scalable data pipelines, developing machine learning models, and solving real-world business problems with data-driven solutions.
Key Responsibilities: Design and deploy LLM-powered solutions (e.g., RAG, LangChain, vector DBs) to enhance business processes. Build and fine-tune traditional ML models (Random Forest, Decision Trees) for predictive analytics. Optimize LLM performance using LoRA fine-tuning and post-training quantization. Develop and deploy containerized AI applications using Docker and FastAPI. Work with agent tools like CrewAI and LangSmith to automate document processing and data extraction workflows. Implement Python- and SQL-based ETL pipelines for real-time data ingestion. Design dashboards and KPI monitoring tools using Metabase to enable data-driven decision-making. Create data consumption triggers and automate reporting for international stakeholders. Moderate large-scale live virtual data science classes and provide operational support.
Required Skills: Programming: Python, SQL, FastAPI. ML & AI: Random Forest, Decision Trees, Clustering, PCA, DL, NLP, Transformers, Gen AI. Frameworks/Tools: LangChain, MLflow, Kubeflow, CrewAI, LangSmith. DevOps: Docker, Git. Databases: MySQL, PostgreSQL, MongoDB. Cloud: AWS (S3, EC2). Visualization: Metabase. Other: Experience with LLM fine-tuning and quantization.
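LoRA fine-tuning, listed in the required skills above, is usually set up by wrapping a base model with a small adapter configuration so that only the adapter weights are trained. A hedged sketch using the Hugging Face peft and transformers libraries; the base model and target modules are assumptions that vary by architecture:

```python
# Attaching LoRA adapters to a causal LM with peft; only the adapter weights are trained.
# Assumes: pip install transformers peft; model name and target modules are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # small example model

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; varies by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```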
Posted 1 month ago
4 - 9 years
9 - 19 Lacs
Navi Mumbai
Hybrid
Hexaware is conducting a walk-in interview for Data Scientist (Agentic AI) at the Navi Mumbai location on 18th May '25 (Sunday). We are urgently looking for immediate/early joiners. Interested candidates can share their CV at umaparvathyc@hexaware.com
Experience range: 3 yrs to 19 yrs. Notice period: 15 days/30 days max. Interview location (face to face): Navi Mumbai.
Open Positions: 1. AI Engineer - 3+ yrs 2. Lead Agentic AI Developer - 5+ yrs 3. Data Scientist Architect - 10+ yrs
MUST HAVE: Agentic AI, LLMs, advanced RAG, NLP, transformer models, LangChain
Technical Skills: 1. Knowledge of Agentic AI concepts and applications 2. Strong experience as a Data Scientist (GenAI) 3. Proficiency with generative AI models like GANs, VAEs, and transformers 4. Expertise with cloud platforms (AWS, Azure, Google Cloud) for deploying AI models 5. Strong Python and FastAPI experience; SDA-based implementations for all the APIs
Posted 1 month ago
2 - 4 years
4 - 6 Lacs
Pune
Work from Office
We are seeking talented and motivated AI Engineers to join our dynamic team and contribute to the development of next-generation AI/GenAI-based products and solutions. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.
Responsibilities:
Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
GenAI Product Development: Participate in the entire AI development lifecycle, including data collection, preprocessing, model training, evaluation, and deployment. Assist in researching and experimenting with state-of-the-art generative AI techniques to improve model performance and capabilities.
Design and Architecture: Participate in design reviews with peers and stakeholders.
Code Review: Review code developed by other developers, providing feedback and adhering to industry-standard best practices like coding guidelines.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Triage defects or customer-reported issues, and debug and resolve them in a timely and efficient manner.
Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
DevOps Model: Understanding of working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.
Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. 2+ years of professional software development experience. Proficiency as a developer using Python, FastAPI, PyTest, Celery, and other Python frameworks. Experience with software development practices and design patterns. Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA. Basic understanding of cloud technologies and DevOps principles. Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.
Preferred Qualifications: Experience with object-oriented programming, concurrency, design patterns, and REST APIs. Experience with CI/CD tooling such as Terraform and GitHub Actions. High-level familiarity with AI/ML, GenAI, and MLOps concepts. Familiarity with frameworks like LangChain and LangGraph. Experience with SQL and NoSQL databases such as MongoDB, MSSQL, or Postgres. Experience with testing tools such as PyTest, PyMock, xUnit, mocking frameworks, etc. Experience with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, Dataflow, and Kubeflow. Experience with Docker and Kubernetes. Experience with Java and Scala a plus.
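Testing practices like those above (PyTest plus mocking, following the test-automation pyramid) can be sketched quickly. A minimal, hedged example in which summarize() is a hypothetical application helper and the LLM client is replaced with a mock:

```python
# Unit-testing an LLM-backed helper with pytest by mocking the API call.
# Assumes: pip install pytest; summarize() is a hypothetical app function defined inline here.
from unittest.mock import MagicMock


def summarize(client, text: str) -> str:
    """Toy application helper: asks an injected LLM client for a one-line summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in one line: {text}"}],
    )
    return response.choices[0].message.content


def test_summarize_returns_model_text():
    fake_client = MagicMock()
    fake_client.chat.completions.create.return_value.choices = [
        MagicMock(message=MagicMock(content="A short summary."))
    ]
    assert summarize(fake_client, "long document ...") == "A short summary."
    fake_client.chat.completions.create.assert_called_once()

# Run with: pytest test_summarize.py
```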
Posted 1 month ago