4.0 - 8.0 years
0 Lacs
nagpur, maharashtra
On-site
As a Data Scientist (Generative AI) Lead with 4-6 years of experience based in Nagpur, you will play a crucial role in transforming raw data into valuable insights and cutting-edge AI products. Your responsibilities span the entire project lifecycle, from defining business questions and analyzing datasets to deploying and monitoring models on the cloud. You will also spearhead generative-AI projects such as chat assistants and retrieval-augmented search, manage a team of data professionals, and collaborate closely with product, engineering, and business teams to deliver tangible value.

Key responsibilities:
- Design and implement predictive and forecasting models that drive business impact
- Conduct A/B experiments to validate ideas and inform product decisions
- Establish and maintain data pipelines to ensure data integrity and timeliness
- Lead generative-AI initiatives such as LLM-powered chat and RAG search (a minimal retrieval sketch follows this posting)
- Deploy and monitor models using contemporary MLOps practices in the public cloud
- Set up monitoring and alerting for accuracy, latency, drift, and cost
- Mentor the team and translate complex insights into actionable narratives for non-technical stakeholders
- Ensure data governance and privacy compliance, and stay abreast of new tools and methodologies to keep solutions at the cutting edge

Requirements:
- Strong foundation in statistics, experiment design, and end-to-end ML workflows
- Proficiency in Python and SQL, with a track record of moving models from notebooks to production
- Hands-on experience with cloud platforms (AWS, Azure, or GCP), including container-based deployment and CI/CD pipelines
- Practical exposure to generative-AI projects involving prompt engineering, fine-tuning, and retrieval-augmented pipelines

If you are passionate about leveraging data science to drive innovation, possess strong technical skills, and enjoy collaborating with cross-functional teams to deliver impactful solutions, this role offers an exciting opportunity to lead transformative AI projects and make a significant contribution to the organization's success.
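The RAG search responsibility above reduces to "embed, retrieve, assemble a grounded prompt". The sketch below illustrates that flow with an in-memory index; `embed()` and the final LLM call are placeholders (assumptions, not part of the posting) for whichever embedding model and chat model the team standardises on, and a managed vector store would normally replace the NumPy array.

```python
# Minimal RAG retrieval sketch: in-memory corpus, cosine similarity via NumPy.
# embed() is a hypothetical hook for the real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a dense vector for `text` from your embedding model."""
    raise NotImplementedError("plug in the actual embedding model here")

def build_index(documents: list[str]) -> np.ndarray:
    # One embedding per document; rows are L2-normalised so cosine similarity
    # becomes a plain dot product.
    vectors = np.stack([embed(doc) for doc in documents])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def retrieve(query: str, documents: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    q = embed(query)
    q = q / np.linalg.norm(q)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def grounded_prompt(query: str, documents: list[str], index: np.ndarray) -> str:
    # Assemble the prompt sent to the chat model; the LLM call itself is left abstract.
    context = "\n\n".join(retrieve(query, documents, index))
    return f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
```

In production this retrieval step would sit behind the containerised, monitored endpoint described above, with latency, drift, and cost metrics logged per request.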
Posted 1 week ago
9.0 - 14.0 years
32 - 37 Lacs
Hyderabad
Work from Office
Mandate skills: Python, plus 2.5-3 years of current experience in Generative AI (GenAI)
Location: Hyderabad
Notice period: Immediate to 30 days
Budget: 34-38 LPA
Contact: Kashif@d2nsolutions.com
Posted 1 month ago
7 - 12 years
35 - 50 Lacs
Pune
Hybrid
Overview:
We are looking for a hands-on, full-cycle AI/ML Engineer who will play a central role in developing a cutting-edge AI agent platform. The platform is designed to automate and optimize complex workflows by leveraging large language models (LLMs), retrieval-augmented generation (RAG), knowledge graphs, and agent orchestration frameworks. As the AI/ML Engineer, you will be responsible for building intelligent agents from the ground up, including prompt design, retrieval pipelines, fine-tuning models, and deploying them in a secure, scalable cloud environment. You'll also implement caching strategies, handle backend integration, and prototype user interfaces for internal and client testing. This role requires deep technical skills, autonomy, and a passion for bringing applied AI solutions into real-world use.

Key Responsibilities:
- Design and implement modular AI agents using large language models (LLMs) to automate and optimize a variety of complex workflows
- Deploy and maintain end-to-end agent/AI workflows and services in cloud environments, ensuring reliability, scalability, and low-latency performance for production use
- Build and orchestrate multi-agent systems using frameworks like LangGraph or CrewAI, supporting context-aware, multi-step reasoning and task execution
- Develop and optimize retrieval-augmented generation (RAG) pipelines using vector databases (e.g., Qdrant, Pinecone, FAISS) to power semantic search and intelligent document workflows
- Fine-tune LLMs using frameworks such as Hugging Face Transformers, LoRA/PEFT, DeepSpeed, or Accelerate to create domain-adapted models
- Integrate knowledge graphs (e.g., Neo4j, AWS Neptune) into agent pipelines for context enhancement, reasoning, and relationship modeling
- Implement cache-augmented generation strategies using semantic caching and tools like Redis or vector similarity to reduce latency and improve consistency (a minimal sketch of this pattern follows this posting)
- Build scalable backend services using FastAPI or Flask and develop lightweight user interfaces or prototypes with tools like Streamlit, Gradio, or React
- Monitor and evaluate model and agent performance using prompt testing, feedback loops, observability tools, and safe AI practices
- Collaborate with architects, product managers, and other developers to translate problem statements into scalable, reliable, and explainable AI systems
- Stay updated on the latest in cloud platforms (AWS/GCP/Azure), software frameworks, agentic frameworks, and AI/ML technologies

Prerequisites:
- Strong Python development skills, including API development and service integration
- Experience with LLM APIs (OpenAI, Anthropic, Hugging Face), agent frameworks (LangChain, LangGraph, CrewAI), and prompt engineering
- Experience deploying AI-powered applications using Docker and cloud infrastructure (Azure preferred), and managing inference endpoints, vector DBs, and knowledge graph integrations in a live production setting
- Proven experience with RAG pipelines and vector databases (Qdrant, Pinecone, FAISS)
- Hands-on experience fine-tuning LLMs using PyTorch, Hugging Face Transformers, and optionally TensorFlow, with knowledge of LoRA, PEFT, or distributed training tools like DeepSpeed
- Familiarity with knowledge graphs and graph databases such as Neo4j or AWS Neptune, including schema design and Cypher/Gremlin querying
- Basic frontend prototyping skills using Streamlit or Gradio, and the ability to work with frontend teams if needed
- Working knowledge of MLOps practices (e.g., MLflow, Weights & Biases), containerization (Docker), Git, and CI/CD workflows
- Cloud deployment experience with Azure, AWS, or GCP environments
- Understanding of caching strategies, embedding-based similarity, and response optimization through semantic caching

Preferred Qualifications:
- Bachelor's degree in Technology (B.Tech) or Master of Computer Applications (MCA) is required; an MS in a similar field is preferred
- 7-10 years of experience in AI/ML, with at least 2 years focused on large language models, applied NLP, or agent-based systems
- Demonstrated ability to build and ship real-world AI-powered applications or platforms, preferably involving agents or LLM-centric workflows
- Strong analytical, problem-solving, and communication skills
- Ability to work independently in a fast-moving, collaborative, and cross-functional environment
- Prior experience in startups, innovation labs, or consulting firms is a plus

Compensation:
The compensation structure will be discussed during the interview.
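Several of the bullets above (semantic caching, embedding-based similarity, latency reduction) describe one common pattern: before paying for a new LLM call, check whether a semantically similar prompt has already been answered. The sketch below shows that pattern with FAISS as the similarity index; `embed()`, `call_llm()`, the 768-dimension size, and the 0.92 threshold are assumptions standing in for the team's actual embedding model, inference endpoint, and tuned cutoff. A Redis-backed store would replace the in-process list for anything multi-instance.

```python
# Cache-augmented generation sketch: look up semantically similar past prompts
# in a FAISS index before calling the LLM. embed() and call_llm() are hypothetical
# hooks for the real embedding model and inference endpoint.
import numpy as np
import faiss

DIM = 768            # embedding dimensionality (model-dependent assumption)
THRESHOLD = 0.92     # cosine-similarity cutoff for treating a prompt as a cache hit

index = faiss.IndexFlatIP(DIM)   # inner product == cosine similarity on unit vectors
cached_answers: list[str] = []

def embed(text: str) -> np.ndarray:
    """Placeholder: return a unit-norm float32 vector from your embedding model."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder: call the production LLM endpoint."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    q = embed(prompt).astype("float32").reshape(1, -1)
    if index.ntotal > 0:
        scores, ids = index.search(q, 1)
        if scores[0][0] >= THRESHOLD:
            return cached_answers[ids[0][0]]   # semantic cache hit: skip the LLM call
    answer = call_llm(prompt)
    index.add(q)                               # cache the new prompt/answer pair
    cached_answers.append(answer)
    return answer
```

The threshold is the main design knob: set too low it returns stale or mismatched answers, set too high it never hits; in practice it is tuned against logged prompt pairs.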
Posted 2 months ago
Accenture: 39581 Jobs | Dublin
Wipro: 19070 Jobs | Bengaluru
Accenture in India: 14409 Jobs | Dublin 2
EY: 14248 Jobs | London
Uplers: 10536 Jobs | Ahmedabad
Amazon: 10262 Jobs | Seattle, WA
IBM: 9120 Jobs | Armonk
Oracle: 8925 Jobs | Redwood City
Capgemini: 7500 Jobs | Paris, France
Virtusa: 7132 Jobs | Southborough