
129 LangGraph Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be working with a large global company located in Viman Nagar, Pune, in a role that requires 5-8 years of experience in software/application development. You will lead scalable, high-performance application architecture, develop enterprise-grade applications, manage Azure DevOps processes, and produce solution design and RCA documentation while working with cross-functional teams.

The ideal candidate holds a graduate degree in Computers/Electronics or a postgraduate degree in Computer Science. Mandatory technical skills include proficiency in Python, FastAPI, React/TypeScript, LangChain, LangGraph, AI agents, Docker, Azure OpenAI, and prompt engineering. Knowledge of AWS (Secrets Manager, IAM, ECS/EC2), Azure AD, Azure DevOps, GitHub, MongoDB (Motor, Beanie ODM), Redis, OAuth2/SAML, JWT, Azure AD integration, and audit logging is required. Soft skills such as problem-solving, mentoring, and clear technical communication are essential.

Joining this company offers a rewarding work culture where your contributions will define the success of the organization. Bajaj Finance Limited is a leading non-banking financial company in India and ranks among Asia's top 10 large workplaces. With over 500 locations across India, this is an opportunity for individuals driven to excel in their careers.

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ML Specialist - GenAI at Claidroid Technologies Pvt. Ltd., your primary responsibility will be to design, develop, and deploy scalable Generative AI systems. The role involves implementing LLM agents, RAG architectures, and MLOps pipelines, and managing the full GenAI product lifecycle. You will lead AI developers, build solutions using LLMs, diffusion models, and GenAI agents, and fine-tune models with LangChain, LangGraph, and Promptflow. Deployment will utilize Azure, containerization, and CI/CD practices. It will be your duty to ensure model observability, optimize performance and cost, apply ethical AI principles, and effectively communicate insights to stakeholders. This position is based in Pune/Trivandrum with WFH/hybrid options, working from 11 AM to 8 PM IST, and is suited to candidates with a notice period under 30 days.

To excel in this role, you should possess 5-10 years of experience in ML engineering and Generative AI. Proficiency in technologies like LLMs, RAG, prompt engineering, LangChain, LangGraph, Promptflow, and AutoGen is essential. Strong skills in Python, PyTorch/TensorFlow, FastAPI, and AsyncIO are required. Experience with CI/CD pipelines (specifically GitHub Actions), Docker, Kubernetes, and Azure cloud services (preferred), or AWS/GCP, is advantageous. Additionally, expertise in MLOps, model deployment, monitoring, and performance tuning, along with an understanding of Responsible AI principles, bias mitigation, and model explainability, is crucial for this role.

Claidroid Technologies is a leader in digital transformation, specializing in Enterprise Service Management and Enterprise Security Management solutions. With headquarters in India and offices in Helsinki, Finland, and the USA, we aim to deliver bespoke services to a wider audience globally. Joining our team offers competitive compensation, a hybrid working model, generous benefits such as comprehensive health insurance and performance bonuses, and ample opportunities for career growth in a dynamic and innovative environment. Be part of our collaborative and inclusive work culture that values your contributions and supports your professional development.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI and ML, you will be responsible for designing, developing, and maintaining intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or scikit-learn, and utilize LangChain for GenAI pipelines; experience with LangGraph is an advantage. Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will work with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, and managing CI/CD pipelines for backend and model deployments will be a key part of your responsibilities. Ensuring the performance, scalability, and security of applications in cloud environments also falls under your purview.

To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or scikit-learn is required, as is experience with LangChain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial, and an understanding of microservices architecture, CI/CD workflows, and agile development practices is also desirable. Nice-to-have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. Familiarity with MLOps tools and practices for model monitoring, versioning, and retraining is a plus.

This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, and the schedule involves day shifts from Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.
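As an illustration of the model-integration work this posting describes, here is a minimal sketch of invoking a deployed SageMaker endpoint from a Python backend using boto3's SageMaker runtime client. The endpoint name and payload schema are hypothetical placeholders, not part of the posting.

```python
import json
import boto3

# Hypothetical endpoint name; replace with your deployed SageMaker endpoint.
ENDPOINT_NAME = "genai-sentiment-endpoint"

runtime = boto3.client("sagemaker-runtime")


def invoke_model(text: str) -> dict:
    """Send a JSON payload to a deployed SageMaker endpoint and return the parsed response."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": text}),
    )
    return json.loads(response["Body"].read())


if __name__ == "__main__":
    print(invoke_model("The onboarding flow was quick and painless."))
```

A backend built with Django or FastAPI would typically call a helper like this from a request handler, keeping the model behind the endpoint independently deployable.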

Posted 1 day ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Senior AI/ML Engineer - Multi-Agent & MCP Architect
Location: Mumbai (Hybrid)
Company: Fin 100X.AI - India's Protocol-Governed AI Financial OS

Company Description
Fin 100X.AI is India's first protocol-governed, AI-powered Financial Operating System built for Bharat's 100 crore underserved citizens. We are reimagining finance through:
- Multi-agent AI orchestration (MCP protocol-driven)
- Explainable and trustworthy AI
- Regulatory-first design (RBI, SEBI, IRDAI, PFRDA aligned)
Backed by IIT Bombay alumni and top fintech leaders, we are building AI for Bharat that blends technology, trust, and inclusion.

Role Overview
As a Senior AI/ML Engineer (MCP + Multi-Agent Orchestration), you will be part of the founding tech braintrust that defines the core AI architecture of Fin 100X.AI. Your mission: design and deploy a world-class, multi-agent AI stack powered by MCP (Model Context Protocol) to deliver scalable, explainable, and reliable AI-driven financial advisory modules for 100 crore users. You will directly influence:
- AI Laxmi orchestration
- Credit Booster, SIP Planner, Fraud Shield
- National-scale agentic financial intelligence

Key Responsibilities
- Architect and implement multi-agent AI systems using the MCP protocol
- Build LLM orchestration pipelines (LangChain, AutoGen, CrewAI, MCP)
- Design retrieval-augmented generation (RAG) pipelines with vector memory and routing
- Deploy neural models (NLP, NLU, recommendation engines) at scale
- Create fallback, routing, and governance layers for explainable AI
- Integrate real-time financial intelligence APIs (OpenAI, Gemini, Hugging Face, etc.)
- Mentor junior AI engineers; enforce best practices in MLOps and agent orchestration
- Collaborate with backend/frontend teams for end-to-end AI productionization

Required Qualifications
Core Technical Skills
- Strong foundation in ML/DL architectures, NLP, and LLMs
- Expertise in multi-agent orchestration frameworks: MCP, AutoGen, CrewAI, LangGraph
- Advanced experience with Python (TensorFlow, PyTorch, Hugging Face, FastAPI)
- Proficiency in vector search systems (Pinecone, FAISS, ChromaDB)
- Deep knowledge of RAG pipelines, memory graphs, and prompt chaining
- Experience deploying models in cloud-native and microservice architectures
Preferred Exposure
- FinTech AI systems or regulated AI environments
- Agent evaluation, explainability, and safety-first design
- Scaling AI in hybrid production (cloud + edge) environments

What We Value
- Architect mindset with hands-on coding skills
- Self-driven innovation and ownership from concept to deployment
- Experience building AI systems at 10M+ scale
- Passion for financial inclusion, Bharat-first AI, and AI for Good

Perks
- Shape India's AI Financial OS from day 0
- ESOP track and leadership growth
- National showcase at Global FinTech Fest 2025
- Mentorship with IIT Bombay alumni and top AI architects
- Solve real Bharat-scale problems with MCP-first AI innovation
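For context on the kind of multi-agent orchestration this role describes, below is a minimal LangGraph sketch of a supervisor node routing between two stub agents. The agent names echo modules named in the posting, but the state schema, routing rule, and agent bodies are illustrative assumptions; MCP integration and real LLM calls are omitted.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class AgentState(TypedDict):
    query: str
    route: str
    answer: str


def supervisor(state: AgentState) -> AgentState:
    # Placeholder routing: a real supervisor would ask an LLM which agent should handle the query.
    route = "credit" if "credit" in state["query"].lower() else "sip"
    return {**state, "route": route}


def credit_agent(state: AgentState) -> AgentState:
    return {**state, "answer": "Credit Booster agent response (stub)."}


def sip_agent(state: AgentState) -> AgentState:
    return {**state, "answer": "SIP Planner agent response (stub)."}


graph = StateGraph(AgentState)
graph.add_node("supervisor", supervisor)
graph.add_node("credit", credit_agent)
graph.add_node("sip", sip_agent)
graph.set_entry_point("supervisor")
graph.add_conditional_edges("supervisor", lambda s: s["route"], {"credit": "credit", "sip": "sip"})
graph.add_edge("credit", END)
graph.add_edge("sip", END)

app = graph.compile()
print(app.invoke({"query": "How do I improve my credit score?", "route": "", "answer": ""}))
```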

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

The Data Chapter at our organization serves as a strategic partner, using state-of-the-art AI and ML technologies to drive data-led initiatives that strengthen risk management, boost revenue generation, and improve operational efficiency. We are committed to maintaining data integrity and governance while delivering high-quality analysis, insights, and automation to support data-driven decision-making throughout DBS. Through our collaborative approach, we provide timely and actionable solutions that align with the organization's strategic goals.

As a member of our team, your responsibilities will include designing, developing, and implementing advanced AI/ML models to uncover actionable insights, automate processes, and facilitate strategic decision-making across various business units. You will work within a team that owns the end-to-end lifecycle of AI initiatives, from problem identification and data engineering to model development, validation, deployment, and monitoring. Collaboration with cross-functional teams, including product, engineering, and business stakeholders, will be essential to identify impactful AI opportunities and deliver innovative solutions. You will be expected to explore cutting-edge techniques such as deep learning, generative AI (e.g., LLMs, GANs), and reinforcement learning to address complex challenges and drive value creation. Conducting risk modeling, scenario analysis, and business-impact evaluation will be crucial to ensure that AI solutions are robust, ethical, and aligned with organizational objectives. Additionally, you will contribute to best practices, toolkits, and scalable AI pipelines while staying abreast of emerging trends and research in AI and machine learning and assessing their potential application within the organization.

To be successful in this role, you should possess 6-8 years of hands-on experience in data science, machine learning, or AI, with a proven track record of deploying models in production environments. Proficiency in Python and common ML/AI libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain, LangGraph, etc., is essential. A solid understanding of data architecture and model deployment (MLOps), along with experience on cloud platforms like AWS, Azure, and GCP, is required. Demonstrated expertise in generative AI technologies, statistical analysis, feature engineering, and advanced machine learning techniques is highly valued. We are seeking individuals with excellent problem-solving skills, analytical thinking, and business acumen. Strong written and verbal communication abilities are essential, as you will be expected to influence and collaborate with senior stakeholders. A Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field is highly preferred.

If you are passionate about AI and machine learning and are looking for a challenging yet rewarding opportunity, we encourage you to apply now. We offer a competitive salary and benefits package, along with a dynamic environment that fosters your growth and recognizes your accomplishments.

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

Wayanad, Kerala

On-site

As a Senior Full-Stack Engineer at our company, you will play a vital role in architecting and developing our core AI products. Working closely with the founders, you will have a unique opportunity to shape both our technology and our company culture. Your responsibilities will include designing and developing scalable AI-driven products from scratch, leading the development of foundational AI technologies and intelligent workflows, setting high technical standards for code quality and performance, building end-to-end solutions using our diverse tech stack, shipping features rapidly in a high-growth environment, and collaborating with the founders on strategic technical decisions.

To excel in this role, you should have at least 2 years of experience in AI product development, strong problem-solving skills, the ability to learn quickly, strength in cross-functional collaboration, and a proven track record of end-to-end product ownership. Proficiency across multiple technologies is essential: you should be comfortable with backend technologies like Python and Golang, frontend technologies such as Next.js and JavaScript, AI/ML technologies like LangChain, LangGraph, and LLMs, and mobile development for Android and iOS.

If you are passionate about AI technology, enjoy working in a fast-paced environment, and are eager to make a significant impact, we would love to have you on board. This job opportunity was posted by Muhasin Rashid from GaugeLabs.

Posted 2 days ago

Apply

3.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Overview
We are seeking a Senior Agentic AI Engineer to lead the design and development of an enterprise-grade Agentic AI framework. This framework will empower business units across the organization to build intelligent, autonomous agents capable of handling complex workflows, using tools, interacting with APIs, and making decisions with minimal human intervention. The role requires deep expertise in architecting scalable, modular AI systems using LLMs, tool integration, memory systems, and agent orchestration, and is ideal for someone with hands-on experience in agentic AI frameworks such as LangGraph, CrewAI, and AutoGPT, along with a deep understanding of enterprise software engineering and system architecture.

Experience: 3.5 - 6 years
Position: AI Engineer
Skills Required: Python, Agentic AI, AI models, agentic frameworks and workflows, LangGraph or CrewAI, open source, data science exposure

Role and Responsibilities
- Architect and build a reusable, secure, and scalable Agentic AI framework that can be adopted by multiple teams across the enterprise.
- Define standards, patterns, and abstractions for building intelligent agents using LLMs and other foundation models.
- Leverage and extend frameworks like LangGraph, CrewAI, or Semantic Kernel to enable advanced agentic capabilities such as long-horizon task planning, tool calling (APIs, databases, RPA, internal services), memory persistence and retrieval (via vector stores or knowledge graphs), autonomous decision-making and reflection, multi-agent orchestration and collaboration, human-in-the-loop workflows, monitoring and observability, governance and compliance, security and access control, and performance and cost optimization.
- Collaborate with cross-functional teams to ensure the framework meets enterprise-grade expectations.
- Integrate the framework with existing enterprise platforms.
- Conduct research and stay current on the latest advancements in Agentic AI, LLMs, and related technologies.
- Mentor and guide junior engineers in best practices for building agentic systems, and coach engineering teams in adopting and extending the framework.

Posted 3 days ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About KronosX AI Labs
At KronosX, we're building the AI-native data infrastructure layer for the next generation of enterprises, especially in regulated industries like healthcare, insurance, and finance. In a world where AI is rewriting how businesses operate, one thing still holds them back: data chaos. We're changing that through automation, agentic AI, and deep domain understanding. We're a small team with early traction, bold ambitions, and a sharp focus on execution. If you're excited to shape the future of how enterprises interact with AI and data, and want your code to actually drive that change, we'd love to talk.

What you will do
As a Full Stack Developer at KronosX, you'll own feature delivery end-to-end, from backend logic to frontend experience. You'll work closely with our Lead Architect and AI/ML engineers to build scalable, modular systems and ship product fast. Specifically, you will:
- Build and scale core product modules using Python, FastAPI, and React
- Integrate LLM agents into data pipeline workflows
- Create clean, modular REST APIs and connect them to frontend UIs
- Build dashboards and interfaces to visualize AI transformations
- Collaborate with design, ML, and QA to ship features quickly and iterate often
- Work closely with the UI designer to build the front end
- Optimize system performance for large-scale structured datasets
- Write clean, testable, maintainable code with good documentation

What you will work with
- Frontend: React (preferred), TypeScript, Tailwind
- Backend: Python (FastAPI or Flask), Postgres, Redis
- Infra: Docker, GitHub Actions (basic CI/CD), Linux
- AI layer: LangGraph, OpenAI APIs, Hugging Face, LLM agents

Must-Haves
- 4+ years of experience in full-stack or backend-heavy roles
- Strong in Python, REST APIs, and backend architecture
- Comfortable with frontend integration and clean UI handoff
- Experience working with structured data workflows
- Strong debugging, version control, and code hygiene habits
- Comfortable working in a fast-paced, ambiguous startup environment

Bonus (Nice-to-Haves)
- Exposure to LLM frameworks (LangChain, LlamaIndex, OpenAI APIs)
- Experience with agent-based systems or prompt-driven pipelines
- Healthcare domain experience
- Familiarity with multi-tenant apps, async tasking (e.g., Celery, RQ), or basic DevOps

Why Join Us
- Be part of a founding team building a core layer that will power the next wave of enterprise AI
- High ownership, zero bureaucracy: your decisions will shape the product and platform from day one
- Work directly with an experienced founder, AI/ML engineers, and real enterprise users
- Gain exposure to cutting-edge tech in agentic AI, LLM orchestration, and data automation
- Work on a real product with enterprise deployment, not POCs
- Join a startup with early U.S. traction, a global vision, and massive upside as we scale
- You grow with us: career growth, learning, and long-term rewards

This is an in-person role in Bangalore. We believe the best products come from tight collaboration in the early stages.

To apply, send your resume and GitHub (if available) to [HIDDEN TEXT]
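To illustrate the FastAPI-plus-LLM-agent serving pattern mentioned in the stack above, here is a minimal sketch. The endpoint path, request schema, and the stubbed agent call are hypothetical and stand in for whatever agent layer (e.g., LangGraph or the OpenAI APIs) a real service would use.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Data transformation API (sketch)")


class TransformRequest(BaseModel):
    record: dict
    instruction: str


class TransformResponse(BaseModel):
    result: dict


def run_llm_agent(record: dict, instruction: str) -> dict:
    # Stub: a real implementation would hand the record and instruction to an LLM agent.
    return {**record, "_note": f"applied: {instruction}"}


@app.post("/transform", response_model=TransformResponse)
def transform(req: TransformRequest) -> TransformResponse:
    """Apply an LLM-driven transformation to a single structured record."""
    return TransformResponse(result=run_llm_agent(req.record, req.instruction))

# Run locally with: uvicorn app:app --reload
```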

Posted 3 days ago

Apply

4.0 - 6.0 years

6 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities
- Define and implement model validation processes and business success criteria in data science terms.
- Contribute to the architecture and data flow for machine learning models.
- Rapidly develop and iterate minimum viable solutions (MVS) that address enterprise needs.
- Conduct advanced data analysis and rigorous testing to enhance model accuracy and performance.
- Work with data architects to leverage existing data models and create new ones as required.
- Collaborate with product teams and business partners to industrialize machine learning models into Ericsson's enterprise solutions.
- Build MLOps pipelines for continuous integration, continuous delivery, validation, and monitoring of AI/ML models.
- Design and implement effective big data storage and retrieval strategies (indexing, partitioning, etc.).
- Develop and maintain APIs for AI/ML models and optimize data pipelines.
- Lead end-to-end ML projects from conception to deployment.
- Stay updated on the latest ML advancements and apply best practices to enterprise AI solutions.

Required Skills & Experience
- 4-6 years of hands-on experience in machine learning, AI, and data science.
- Strong knowledge of ML frameworks (Keras, TensorFlow, Spark ML, etc.).
- Proficiency in ML algorithms, deep learning, reinforcement learning (RL), and large language models (LLMs).
- Expertise in MLOps, including model lifecycle management and monitoring.
- Experience with containerization and orchestration (Docker, Kubernetes, Helm charts).
- Hands-on expertise with workflow orchestration tools (Kubeflow, Airflow, Argo).
- Strong programming skills in Python and experience with C++, Scala, Java, or R.
- Experience in API design and development for AI/ML models.
- Hands-on knowledge of Terraform for infrastructure automation.
- Familiarity with AWS services (Data Lake, Athena, SageMaker, OpenSearch, DynamoDB, Redshift).
- Strong understanding of self-hosted deployment of LLMs on AWS.
- Experience with RASA, LangChain, LangGraph, LlamaIndex, Django, and Open Policy Agent.
- Working knowledge of vector databases, knowledge graphs, retrieval-augmented generation (RAG), agents, and agentic mesh architectures.
- Expertise in monitoring tools like Datadog for Kubernetes environments.
- Ability to document, present, and communicate technical findings to business stakeholders.
- Proven ability to contribute to ML forums, patents, and research publications.

Educational Qualifications
- B.Tech/B.E. in Computer Science, MCA, or a Master's in Mathematics/Statistics from a top-tier institute.

Posted 3 days ago

Apply

5.0 - 8.0 years

27 - 30 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Mandatory Skills:
1. 5+ years of experience in the design and development of state-of-the-art language models; utilize off-the-shelf LLM services, such as Azure OpenAI, to integrate LLM capabilities into applications.
2. Deep understanding of language models and strong proficiency in designing and implementing RAG-based workflows to enhance content generation and information retrieval.
3. Experience in building, customizing, and fine-tuning LLMs via OpenAI Studio, extended through Azure OpenAI cognitive services, for rapid PoCs.
4. Proven track record of successfully deploying and optimizing LLM models in the cloud (AWS, Azure, or GCP) for inference in production environments, and proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.
5. Apply prompt engineering techniques to design refined and contextually relevant prompts for language models.
6. Monitor and analyze the performance of LLMs by experimenting with various prompts, evaluating results, and refining strategies accordingly.
7. Build customizable, conversable AI agents for complex tasks using CrewAI and LangGraph to enhance Gen AI solutions.
8. Proficiency in MCP (Model Context Protocol) for optimizing context-aware AI model performance and integration is a plus.

Location: Mumbai, Pune, Bangalore, Chennai, and Noida
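As a sketch of the RAG-style prompting workflow referenced in points 2, 5, and 6, the snippet below composes a grounded prompt from already-retrieved chunks and calls an Azure OpenAI chat deployment via the openai Python SDK. The deployment name, environment variables, and API version are placeholder assumptions, not values from this posting.

```python
import os

from openai import AzureOpenAI

# Placeholder configuration; set these for your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)


def answer_with_context(question: str, retrieved_chunks: list[str]) -> str:
    """Compose a grounded prompt from retrieved chunks and query an Azure OpenAI chat deployment."""
    context = "\n\n".join(retrieved_chunks)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # for Azure this is the *deployment* name, not the base model name
        messages=[
            {"role": "system", "content": "Answer only from the provided context; say so if it is missing."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```

Prompt refinement then typically happens by varying the system instruction, chunking, and temperature and comparing the evaluated outputs, as point 6 describes.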

Posted 3 days ago

Apply

4.0 - 7.0 years

25 - 34 Lacs

Pune

Work from Office

Full Stack Engineer (Python, Conversational Text AI)
Experience: 4-7 years
Location: Pune
Full-stack developer with AI/ML experience
- 2+ years of experience in backend development using Python and Node.js
- 1+ years of experience in frontend skills: JavaScript and HTML/CSS

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The role involves working on Agentic AI Platform Delivery, where you will be responsible for developing and maintaining autonomous software agents using modern LLM frameworks. You will also build reusable components for business process automation, design agent orchestration, prompt engineering, and LLM integrations, and enable deployment across CRM systems and enterprise data platforms. Additionally, you will work on Generative AI and model optimization by fine-tuning LLMs/SLMs with proprietary NBFC data, with a focus on model distillation, quantization, and edge-deployment readiness. You will also create self-learning systems by developing adaptive frameworks that learn from interaction outcomes and implementing lightweight models to support real-time decision-making.

The ideal candidate holds a B.E./B.Tech/M.Tech in Computer Science or a related field, with 3-7 years of experience in AI/ML roles. Proficiency in languages such as Python, Node.js, JavaScript, React, and Java is required, along with experience using tools and frameworks like LangChain, Semantic Kernel, LangGraph, and CrewAI. Familiarity with platforms such as GCP, MS Foundry, Copilot Studio, BigQuery, and Power Apps/BI is essential. Knowledge of agent tools like the Agent Development Kit (ADK) and Multi-agent Communication Protocol (MCP), as well as a strong understanding of prompt engineering, LLM integration, and orchestration, is also expected. In addition to the technical requirements, the candidate should possess strong communication skills, teamwork abilities, and problem-solving capabilities. Being proactive, detail-oriented, and able to work under pressure are desirable traits for this role.

Joining this dynamic team at Bajaj Finance Limited offers numerous perks, benefits, and a vibrant work culture. The company values its employees and recognizes their contributions, making it a rewarding and challenging environment in which to grow and excel. With a presence in over 500 locations across India, you will have the opportunity to be part of one of Asia's top 10 large workplaces and contribute to the success and innovation of a leading non-banking financial company.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will build and manage RESTful APIs using Django or FastAPI for AI/ML model integration, and develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or scikit-learn. Your expertise in implementing GenAI pipelines using LangChain will be crucial, and experience with LangGraph is considered a strong advantage. You will leverage AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, and collaborate with data scientists, DevOps, and architects to integrate models and workflows into production. You will also build and manage CI/CD pipelines for backend and model deployments, ensure the performance, scalability, and security of applications in cloud environments, monitor production systems, troubleshoot issues, and optimize model and API performance.

To excel in this role, you must possess at least 5 years of hands-on experience in Python backend development, including strong experience building RESTful APIs with Django or FastAPI. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with LangChain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial. Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows are advantageous. Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, are considered nice-to-have qualifications.

In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications. Your contributions will be instrumental in scaling backend systems and implementing AI/ML applications effectively.

Posted 6 days ago

Apply

2.0 - 10.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You should have 3 to 10 years of experience in AI development and be located in Coimbatore; immediate joiners are preferred. A minimum of 2 years of experience in core GenAI is required.

As an AI Developer, your responsibilities will include designing, developing, and fine-tuning Large Language Models (LLMs) for various in-house applications. You will implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality, and develop and deploy agentic AI systems capable of autonomous decision-making and task execution. Building and managing data pipelines for processing, transforming, and feeding structured/unstructured data into AI models will be part of your role, and it is essential to ensure the scalability, performance, and security of AI-driven solutions in production environments. Collaboration with cross-functional teams, including data engineers, software developers, and product managers, is expected. You will conduct experiments and evaluations to improve AI system accuracy and efficiency while staying updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

You should have strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT, and hands-on experience with RAG architectures, including vector databases such as Pinecone, ChromaDB, Weaviate, OpenSearch, and FAISS. Experience building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks is preferred. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow is necessary, as is experience with Python web frameworks such as FastAPI, Django, or Flask. You should also have experience designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes) is essential. Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications is a plus, and a strong understanding of vector search, embedding models, and hybrid retrieval techniques is required. Experience optimizing inference and serving AI models in real-time production systems is beneficial. Experience with multi-modal AI (text, image, audio) and familiarity with privacy-preserving AI techniques and responsible AI frameworks are desirable, as is an understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Skills relevant to this role include PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, GenAI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, and Kafka.

Posted 1 week ago

Apply

4.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

We are conducting an in-person hiring drive for the position of AIML Engineer in Hyderabad on 26th July 2025.

Interview Location: Gate 11, Argus Block - Sattva Knowledge City, 6th Floor, beside T-Hub, Silpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, 500081.

We are looking for an experienced and talented AIML Developer to join our growing data competency team. The ideal candidate will have a strong background in working with GenAI, ML, LangChain, LangGraph, microservices (FastAPI), and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.

Role: AIML Engineer
Job Location: All Persistent locations
Experience: 4-12 years
Job Type: Full-time employment

What You'll Do:
- Develop and fine-tune LLMs for real-world use cases
- Build and optimize RAG pipelines using LangChain or similar tools
- Integrate GenAI capabilities into enterprise applications
- Write hands-on Python code
- Work with vector databases and embedding techniques
- Collaborate with cross-functional teams on AI-driven features

We're looking for a skilled engineer with hands-on experience in LLMs, LangChain, and chatbot development.

Expertise You'll Bring:
- 4+ years of experience working with data science
- Strong Python skills and familiarity with TensorFlow, PyTorch, and scikit-learn
- Basic understanding of Flask, Django, or FastAPI is a plus
- Strong analytical, problem-solving, and troubleshooting skills
- Ability to work independently and in a collaborative team environment
- Cloud deployment knowledge on AWS, Azure, or GCP is a must

You'll work on real-world GenAI applications in a collaborative, fast-paced environment, stay updated with the latest in GenAI, and contribute to innovative solutions.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

The selected candidate will be working for our Indian JV, D2AI Labs, in Bangalore/Chennai/Remote.

Key Responsibilities:
- Lead Agent Development: Architect, build, and deploy the central interface agent that connects and coordinates specialized AI agents.
- Agent Orchestration: Implement reliable communication, delegation, and data-handling mechanisms between agents.
- LLM Integration: Fine-tune and integrate LLMs (Azure OpenAI or similar), utilizing advanced prompt engineering techniques.
- MLOps Implementation: Maintain robust MLOps pipelines, CI/CD workflows, and performance monitoring on Azure.
- Vector Database Management: Design data pipelines using vector databases like Pinecone, Qdrant, or Milvus for RAG and agent memory.
- A2A Communication & MCP: Enable advanced agent-to-agent communication using established protocols like MCP/MCPTx.
- Collaborate with cross-functional teams to translate business requirements into scalable AI/ML solutions.
- Optimize AI models for performance, scalability, and accuracy.
- Conduct thorough testing, evaluation, and documentation of models and systems.
- Stay up to date with the latest advancements in AI, particularly in the Generative AI space.

Qualifications & Skills:
- Proven experience in LLM model development (e.g., OpenAI, GPT, BERT, etc.).
- Hands-on experience in building GenAI agents and conversational workflows.
- Strong Python programming skills with a focus on AI/ML libraries and frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
- Proven experience building or orchestrating multi-agent systems using frameworks like LangGraph.
- Prior work experience in the telecom industry with a clear understanding of domain-specific challenges.

Posted 1 week ago

Apply

4.0 - 8.0 years

25 - 35 Lacs

Pune, Mumbai (All Areas)

Work from Office

Develop Conversational Text AI platforms using modern LLM frameworks. Work across backend (Python, Node.js), frontend (JavaScript, HTML/CSS), databases, and AI/ML stacks. Required Candidate profile Proficient in Python, Node.js, JavaScript, HTML/CSS, AI/ML (LLMs, embeddings, prompt engineering), Redis, PostgreSQL, Cosmos DB, DevOps, CI/CD. Bachelor’s or Master’s in CS/related field.

Posted 1 week ago

Apply

3.0 - 5.0 years

7 - 17 Lacs

Pune

Hybrid

We are looking for a skilled Senior Data Scientist with experience in building cutting-edge AI solutions, particularly in the domains of Generative AI, Agentic AI, and LLM-based architectures. If you're passionate about developing intelligent, autonomous systems, this role offers an exciting opportunity to work on impactful, next-gen AI initiatives.

Responsibilities
- Architect and implement multi-agent AI systems leveraging LLM frameworks such as LangChain, LangGraph, and AutoGen.
- Develop context-aware RAG pipelines to power grounded, relevant AI outputs.
- Design and deploy scalable Generative AI services using FastAPI and integrate them with Azure AI pipelines.
- Orchestrate complex agent workflows involving memory, planning, and decision-making tools.
- Partner closely with engineers, product teams, and legal domain experts to identify automation opportunities within legal and compliance workflows.
- Apply NLP techniques to process and extract insights from unstructured legal data.
- Manage and optimize structured and unstructured data pipelines using tools like PostgreSQL, Solr, and Cosmos DB.

Key Requirements
- Strong command of Python and relevant libraries such as LangChain, LangGraph, Transformers, spaCy, and scikit-learn.
- Proven experience working with Large Language Models (GPT, Claude, LLaMA, etc.), including fine-tuning and customization.
- Familiarity with agentic AI concepts and hands-on experience developing tool-using, reasoning agents.
- Skilled in building APIs using FastAPI and managing ML workflows with MLflow.
- Experience working with cloud-based AI platforms such as Azure AI, OpenAI APIs, or AWS Bedrock.
- Deep understanding of RAG architectures, semantic search, and vector databases.
- Proficient in SQL, comfortable working on Linux, and experienced with containerized and distributed environments.
- Strong grasp of data structures, statistical modelling, and backend database architecture.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

**Job Title:** Junior Data Scientist
**Location:** Bangalore
**Reporting to:** Senior Manager - Analytics

**Purpose of the role:**
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.

In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of LLM-based frameworks, tools, and technologies; cloud-native technologies and solutions; and microservices-based software architecture and design patterns. As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.

**Key tasks & accountabilities:**

**Large Language Models (LLM):**
- Experience with LangChain and LangGraph
- Proficiency in building agentic patterns like ReAct, ReWoo, and LLMCompiler

**Multi-modal Retrieval-Augmented Generation (RAG):**
- Expertise in multi-modal AI systems (text, images, audio, video)
- Designing and optimizing chunking strategies and clustering for large data processing

**Streaming & Real-time Processing:**
- Experience in audio/video streaming and real-time data pipelines
- Low-latency inference and deployment architectures

**NL2SQL:**
- Natural language-driven SQL generation for databases
- Experience with natural language interfaces to databases and query optimization

**API Development:**
- Building scalable APIs with FastAPI for AI model serving

**Containerization & Orchestration:**
- Proficient with Docker for containerized AI services
- Experience with orchestration tools for deploying and managing services

**Data Processing & Pipelines:**
- Experience with chunking strategies for efficient document processing
- Building data pipelines to handle large-scale data for AI model training and inference

**AI Frameworks & Tools:**
- Experience with AI/ML frameworks like TensorFlow and PyTorch
- Proficiency in LangChain, LangGraph, and other LLM-related technologies

**Prompt Engineering:**
- Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM-as-a-judge, and self-reflection prompting
- Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy
- Strong understanding of context window management and optimizing prompts for performance and efficiency

**Qualifications, Experience, Skills:**

**Level of educational attainment required (1 or more of the following):**
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.

**Previous Work Experience Required:**
- Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.

**Technical Skills Required:**
- Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

And above all of this, an undying love for beer! We dream big to create a future with more cheer.
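For reference, the ReAct pattern named under the LLM accountabilities can be sketched as a loop that alternates model "thoughts" with tool calls until a final answer appears. The snippet below uses a stubbed LLM and a single toy tool purely for illustration; it is not AB InBev's implementation.

```python
import re


def calculator(expression: str) -> str:
    """A single example tool: evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only; not safe for untrusted input


TOOLS = {"calculator": calculator}


def fake_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call; it emits ReAct-style Thought/Action/Final lines.
    if "Observation:" not in prompt:
        return "Thought: I should compute this.\nAction: calculator[12*7]"
    return "Thought: I have the result.\nFinal Answer: 84"


def react_loop(question: str, max_steps: int = 5) -> str:
    """Alternate between model reasoning and tool calls until a final answer is produced."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", reply)
        if match:
            tool, arg = match.groups()
            transcript += f"\nObservation: {TOOLS[tool](arg)}"
    return "No answer within step budget."


print(react_loop("What is 12 times 7?"))
```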

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a member of our team, you will work on the Conversational Text AI Platform, where your primary tasks will include building and maintaining the system using cutting-edge LLM frameworks. You will collaborate closely with product owners and domain experts to develop reusable components for various business processes, and you will play a key role in developing core infrastructure and reusable components to support the deployment of conversational AI systems. Your work will involve orchestration, prompt engineering, and integrating LLM-powered solutions with enterprise data platforms.

In the area of Generative AI and model optimization, you will fine-tune LLMs/SLMs using proprietary NBFC data, perform distillation and quantization of models for edge deployment, and evaluate and run LLM/SLM models on local/edge server machines. Furthermore, you will have the opportunity to build self-learning systems that can adapt without requiring full retraining, enabling real-time learning on the edge through lightweight local models.

The ideal candidate will hold a Bachelor's or Master's degree in computer science, engineering, or a related field, along with a minimum of 7 years of experience in Python, Node.js, JavaScript, HTML/CSS, Redis, Postgres, Azure Cosmos DB, DevOps, and CI/CD, with exposure to AI/ML. Strong programming skills in Python, Node.js, JavaScript, and HTML/CSS are essential, along with familiarity with Redis, Postgres, vector embeddings, speech-to-text and text-to-speech services, Azure Cosmos DB, DevOps, CI/CD, and LangChain or LangGraph. Experience building or integrating LLMs for task automation, reasoning, or autonomous workflows, as well as a solid understanding of prompt engineering, tool calling, and agent orchestration, will be highly valued.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Senior Data Science Lead is a pivotal role responsible for driving the research, development, and deployment of semi-autonomous AI agents to solve complex enterprise challenges. The role requires hands-on experience with LangGraph and involves leading initiatives to build multi-agent AI systems that operate with greater autonomy, adaptability, and decision-making capability. The ideal candidate will have deep expertise in LLM orchestration, knowledge graphs, reinforcement learning (RLHF/RLAIF), and real-world AI applications. As a leader in this space, they will be responsible for designing, scaling, and optimizing agentic AI workflows, ensuring alignment with business objectives while pushing the boundaries of next-gen AI automation.

Key Responsibilities:

1. Architecting & Scaling Agentic AI Solutions:
- Design and develop multi-agent AI systems using LangGraph for workflow automation, complex decision-making, and autonomous problem-solving.
- Build memory-augmented, context-aware AI agents capable of planning, reasoning, and executing tasks across multiple domains.
- Define and implement scalable architectures for LLM-powered agents that integrate seamlessly with enterprise applications.

2. Hands-On Development & Optimization:
- Develop and optimize agent orchestration workflows using LangGraph, ensuring high performance, modularity, and scalability.
- Implement knowledge graphs, vector databases (Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) techniques for enhanced agent reasoning.
- Apply reinforcement learning (RLHF/RLAIF) methodologies to fine-tune AI agents for improved decision-making.

3. Driving AI Innovation & Research:
- Lead cutting-edge AI research in agentic AI, LangGraph, LLM orchestration, and self-improving AI agents.
- Stay ahead of advancements in multi-agent systems, AI planning, and goal-directed behavior, applying best practices to enterprise AI solutions.
- Prototype and experiment with self-learning AI agents, enabling autonomous adaptation based on real-time feedback loops.

4. AI Strategy & Business Impact:
- Translate agentic AI capabilities into enterprise solutions, driving automation, operational efficiency, and cost savings.
- Lead agentic AI proof-of-concept (PoC) projects that demonstrate tangible business impact and scale successful prototypes into production.

5. Mentorship & Capability Building:
- Lead and mentor a team of AI engineers and data scientists, fostering deep technical expertise in LangGraph and multi-agent architectures.
- Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.
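As a small illustration of the vector-database retrieval step underpinning the RAG techniques listed above, the sketch below builds a FAISS index over toy embeddings and retrieves the nearest chunks for a query vector. The dimensionality and data are made up, and the embedding model itself is omitted.

```python
import numpy as np
import faiss

DIM = 384  # embedding dimensionality; depends on the embedding model used

# Toy corpus embeddings; in practice these come from an embedding model over document chunks.
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, DIM), dtype=np.float32)
faiss.normalize_L2(doc_vectors)  # normalize so inner product behaves like cosine similarity

index = faiss.IndexFlatIP(DIM)
index.add(doc_vectors)


def retrieve(query_vector: np.ndarray, k: int = 5):
    """Return (scores, ids) of the k most similar chunks; the chunks then go to the LLM as context."""
    query = query_vector.astype(np.float32).reshape(1, -1)
    faiss.normalize_L2(query)
    scores, ids = index.search(query, k)
    return scores[0], ids[0]


scores, ids = retrieve(rng.random(DIM))
print(list(zip(ids.tolist(), scores.round(3).tolist())))
```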

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a skilled and visionary AI Lead at Matellio, you will be responsible for driving the development of advanced AI solutions. This dual role combines technical leadership with hands-on development, making it ideal for individuals who excel at solving complex problems using machine learning, NLP, and LLM technologies.

Your key responsibilities will include leading and mentoring a team of 4-5 AI/ML engineers and overseeing the entire project lifecycle from research to production deployment. You will design and own the high-level and low-level architecture for AI and GenAI-based applications, ensuring scalability, performance, and maintainability. Collaboration with sales and business development teams to create technical proposals, solution designs, and architecture diagrams for Sales Qualified Leads (SQLs) will also be part of your role. Additionally, you will work as an individual contributor when necessary, developing key components of solutions to accelerate delivery and establish best practices. Your responsibilities extend to researching, designing, and implementing machine learning and deep learning models, including fine-tuning and deploying LLMs like GPT and BERT for production use cases. Building NLP pipelines and retrieval-augmented generation (RAG) systems using tools such as LangChain, LangGraph, and vector databases like FAISS, Pinecone, and Weaviate will also be crucial. You will apply statistical techniques to feature engineering, model evaluation, and performance optimization, while ensuring adherence to standards for code quality, testing, CI/CD, and documentation. Staying up to date with the latest advancements in AI/ML and proposing innovative solutions is expected, as is conducting code reviews, promoting peer learning, and fostering a strong engineering culture.

To qualify for this role, you should hold a Bachelor's, Master's, or Ph.D. in Computer Science, Engineering, Mathematics, or a related field, and have at least 7 years of hands-on experience in AI/ML model development and deployment. Proven expertise in leading technical teams and delivering production-grade AI/ML solutions is required, along with strong architectural skills and experience designing end-to-end solutions involving cloud, APIs, and ML models. Proficiency in Python and ML libraries such as TensorFlow, PyTorch, and scikit-learn is necessary, along with hands-on experience in NLP tools like Hugging Face Transformers, spaCy, and NLTK. Practical knowledge of fine-tuning and deploying LLMs, as well as building GenAI solutions, is expected. Familiarity with tools like LangChain, LangGraph, and vector stores (e.g., FAISS, Pinecone) will be advantageous. A solid understanding of classical ML algorithms (SVM, decision trees, etc.) and when to use them, experience deploying models and services using REST APIs, Docker, and CI/CD pipelines, and exposure to cloud platforms such as AWS, GCP, or Azure are highly desirable. Excellent analytical thinking, problem-solving, and communication skills will be crucial for success in this role.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Senior Data Scientist (R01551342) position is for a Data Science Lead with expertise in problem formulation, OKR validation, data wrangling, data storytelling, problem solving, Excel VBA, data curiosity, technical decision making, communication and articulation, business acumen, design thinking, and data literacy.

As the Agentic AI Lead, you will play a crucial role in researching, developing, and deploying semi-autonomous AI agents to address complex enterprise challenges. You will leverage LangGraph and lead projects aimed at constructing multi-agent AI systems with enhanced autonomy and decision-making capabilities. The ideal candidate has in-depth knowledge of LLM orchestration, knowledge graphs, reinforcement learning, and real-world AI applications, and will design, scale, and optimize agentic AI workflows that align with business objectives and drive innovation in AI automation.

Key Responsibilities:

1. Architecting & Scaling Agentic AI Solutions:
- Design and develop multi-agent AI systems using LangGraph for workflow automation and decision-making.
- Create memory-augmented AI agents capable of planning, reasoning, and executing tasks across various domains.
- Implement scalable architectures for LLM-powered agents that integrate seamlessly with enterprise applications.

2. Hands-On Development & Optimization:
- Develop and optimize agent orchestration workflows with LangGraph for high performance and scalability.
- Use knowledge graphs and retrieval-augmented generation techniques to enhance agent reasoning.
- Apply reinforcement learning methodologies for improved decision-making.

3. Driving AI Innovation & Research:
- Lead AI research in agentic AI, LangGraph, LLM orchestration, and self-improving AI agents.
- Stay updated on advancements in multi-agent systems and goal-directed behavior to apply best practices in enterprise AI solutions.
- Prototype self-learning AI agents for autonomous adaptation based on real-time feedback.

4. AI Strategy & Business Impact:
- Translate agentic AI capabilities into enterprise solutions for automation, efficiency, and cost savings.
- Lead proof-of-concept projects to demonstrate business impact and scale successful prototypes into production.

5. Mentorship & Capability Building:
- Mentor AI engineers and data scientists to build expertise in LangGraph and multi-agent architectures.
- Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.

This role requires a strategic thinker with strong technical skills and a passion for innovation in AI technologies. The successful candidate will drive AI initiatives, lead research efforts, and mentor a team to deliver impactful solutions in a fast-paced environment.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As a Backend Developer, your primary responsibility will be to develop and maintain backend services using Python, with a strong emphasis on FastAPI, NumPy, and Polars for efficient data handling. You will build and manage agentic workflows utilizing LangChain, LangGraph, and MCP agents to facilitate dynamic, multi-step reasoning, and design and execute RAG pipelines for contextual information retrieval and response generation. Integration and optimization of MongoDB or similar vector databases like FAISS and Pinecone for semantic search and embedding storage will also fall under your purview. Collaboration with cross-functional teams to deploy scalable AI services in production will be a crucial part of your role. Furthermore, you will be responsible for performance tuning, testing, and deployment of AI components, while keeping abreast of the latest developments in GenAI, LLMs, and agentic architectures.

The ideal candidate has 3-6 years of experience in backend development using Python, with hands-on experience using FastAPI to build RESTful APIs and proficiency in NumPy and Polars for numerical and tabular data processing. A solid understanding of Generative AI concepts is essential, along with practical experience working with LangChain, LangGraph, and MCP agents. Experience building and deploying agentic RAG systems and familiarity with MongoDB or other vector databases for semantic search and retrieval will be advantageous. Knowledge of cloud platforms such as Azure and containerization tools like Docker/Kubernetes will be considered a plus. To qualify for this role, you should hold a Bachelor's or Master's degree in computer science, data science, mathematics, or a related field.
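To illustrate the NumPy/Polars data-handling side of this kind of backend role, a small preparation sketch follows. The columns and the stubbed embedding step are hypothetical, and the Polars calls assume a recent Polars release.

```python
import numpy as np
import polars as pl

# Hypothetical source data and column names, used purely for illustration.
df = pl.DataFrame({
    "doc_id": [1, 2, 3, 4],
    "category": ["faq", "faq", "policy", "policy"],
    "text": ["Reset password", "Update email", "Refund policy", "Privacy policy"],
})

# Typical Polars-style preparation before embedding: filter, derive columns, aggregate.
prepared = (
    df.filter(pl.col("text").str.len_chars() > 0)
      .with_columns(pl.col("text").str.to_lowercase().alias("clean_text"))
)

counts = prepared.group_by("category").agg(pl.len().alias("n_docs"))
print(counts)

# Stub embedding step: a real pipeline would call an embedding model on "clean_text"
# and push the vectors into MongoDB, FAISS, or Pinecone for semantic search.
embeddings = np.random.default_rng(0).random((prepared.height, 384), dtype=np.float32)
print(embeddings.shape)
```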

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Senior Generative AI Engineer, your primary role will involve conducting original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. A strong publication record in esteemed conferences and journals, demonstrating valuable contributions to Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML), is essential.

You will design and experiment with multimodal generative models that incorporate various data types such as text, images, and other modalities to enhance AI capabilities. Your expertise will be crucial in developing autonomous AI systems that exhibit agentic behavior, enabling them to make independent decisions and adapt to dynamic environments. Leading the design, development, and implementation of generative AI models and systems will be a key aspect of your role. This involves selecting suitable models, training them on extensive datasets, fine-tuning hyperparameters, and optimizing overall performance; a deep understanding of the problem domain is imperative for effective model development and implementation.

Furthermore, you will optimize generative AI algorithms to improve their efficiency, scalability, and computational performance, using techniques such as parallelization, distributed computing, and hardware acceleration to make the most of modern computing architectures. Managing large datasets through data preprocessing and feature engineering to extract critical information for generative AI models will also be a crucial part of your responsibilities. You will evaluate the performance of generative AI models using relevant metrics and validation techniques, conducting experiments, analyzing results, and iteratively refining models to reach desired performance benchmarks. Providing technical leadership and mentorship to junior team members, guiding their development in generative AI, is also part of the role.

Documenting research findings, model architectures, methodologies, and experimental results thoroughly is essential; you will prepare technical reports, presentations, and whitepapers to communicate insights and findings to stakeholders. Staying updated on the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities is crucial to fostering a culture of learning and innovation within the team.

Mandatory technical skills for this role include strong programming abilities in Python and familiarity with frameworks like PyTorch or TensorFlow, along with in-depth knowledge of deep learning concepts such as CNNs, RNNs, LSTMs, Transformers, LLMs (BERT, GPT, etc.), and NLP algorithms. Experience with frameworks like LangGraph, CrewAI, or AutoGen for developing, deploying, and evaluating AI agents is also essential. Preferred technical skills include expertise in cloud computing, particularly with Google, AWS, or Azure cloud platforms, and an understanding of the data analytics services offered by these platforms. Hands-on experience with ML platforms like GCP Vertex AI, Azure AI Foundry, or AWS SageMaker is desirable.
Strong communication skills, the ability to work independently with minimal supervision, and a proactive approach to escalating when necessary are also key attributes for this role. If you have a Master's or PhD degree in Computer Science and 6 to 8 years of experience with a strong record of publications in top-tier conferences and journals, this role could be a great fit for you. Preference will be given to research scholars from esteemed institutions like IITs, NITs, and IIITs.

Posted 1 week ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies