0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
Prompt Engineering: Crafting effective prompts for models like GPT, DALL·E, and Codex; understanding prompt tuning, chaining, and context management.
API Integration: Using APIs from OpenAI, Hugging Face, Cohere, etc.; handling authentication, rate limits, and response parsing.
Model Fine-Tuning & Customization: Fine-tuning open-source models (e.g., LLaMA, Mistral, Falcon) using tools like LoRA, PEFT, and Hugging Face Transformers.
Responsibilities
Prompt Engineering: Crafting effective prompts for models like GPT, DALL·E, and Codex; understanding prompt tuning, chaining, and context management.
API Integration: Using APIs from OpenAI, Hugging Face, Cohere, etc.; handling authentication, rate limits, and response parsing.
Model Fine-Tuning & Customization: Fine-tuning open-source models (e.g., LLaMA, Mistral, Falcon) using tools like LoRA, PEFT, and Hugging Face Transformers.
Data Engineering for AI: Collecting, cleaning, and preparing datasets for training or inference; understanding tokenization and embeddings.
LangChain / LlamaIndex: Building AI-powered apps with memory, tools, and retrieval-augmented generation (RAG); connecting LLMs to external data sources like PDFs, databases, or APIs.
Vector Databases: Using Pinecone, Weaviate, FAISS, or Chroma for semantic search and RAG; understanding embeddings and similarity search.
Frontend + GenAI Integration: Building GenAI-powered UIs with React, Next.js, or Flutter; integrating chatbots, image generators, or code assistants.
Tools: OpenAI, Hugging Face, LangChain/LlamaIndex
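The "handling rate limits" skill in the API Integration bullet usually reduces to retry-with-exponential-backoff around the provider call. A minimal stdlib-only sketch; `RuntimeError` here is a stand-in for a real SDK's rate-limit exception, and the names are illustrative, not any vendor's API:

```python
import time
import random

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit-style failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for a provider's 429 / rate-limit error
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Exponential backoff with jitter: base, 2*base, 4*base, ... plus noise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In practice you would catch the specific exception class your SDK raises (and often respect a `Retry-After` header), but the control flow is the same.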
Posted 19 hours ago
2.0 - 5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
About the Role
We are seeking a highly motivated and creative Platform Engineer with a true research mindset. This is a unique opportunity to move beyond traditional development and step into a role where you will ideate, prototype, and build production-grade applications powered by Generative AI. You will be a core member of a platform team, responsible for developing both internal and customer-facing solutions that are not just functional but intelligent. If you are passionate about the MERN stack, Python, and the limitless possibilities of Large Language Models (LLMs), and you thrive on building things from the ground up, this role is for you.
Core Responsibilities
Innovate and Build: Design, develop, and deploy full-stack platform applications integrated with Generative AI, from concept to production.
AI-Powered Product Development: Create and enhance key products such as intelligent chatbots for customer service and internal support; automated quality analysis and call-auditing systems using LLMs for transcription and sentiment analysis; and AI-driven internal portals and dashboards to surface insights and streamline workflows.
Full-Stack Engineering: Write clean, scalable, and robust code across the MERN stack (MongoDB, Express.js, React, Node.js) and Python.
GenAI Integration & Optimization: Work hands-on with foundation LLMs, fine-tune custom models, and implement advanced prompting techniques (zero-shot, few-shot) to solve specific business problems.
Research & Prototyping: Explore and implement cutting-edge AI techniques, including setting up systems for offline LLM inference to ensure privacy and performance.
Collaboration: Partner closely with product managers, designers, and business stakeholders to transform ideas into tangible, high-impact technical solutions.
Required Skills & Experience
Experience: 2-5 years of professional experience in a software engineering role.
Full-Stack Proficiency: Strong command of the MERN stack (MongoDB, Express.js, React, Node.js) for building modern web applications.
Python Expertise: Solid programming skills in Python, especially for backend services and AI/ML workloads.
Generative AI & LLM Experience (Must-Have): Demonstrable experience integrating with foundation LLMs (e.g., OpenAI API, Llama, Mistral). Hands-on experience building complex AI systems and implementing architectures such as Retrieval-Augmented Generation (RAG) to ground models with external knowledge. Practical experience with AI application frameworks like LangChain and LangGraph to create agentic, multi-step workflows. Deep understanding of prompt engineering techniques (zero-shot, few-shot prompting). Experience with, or a strong theoretical understanding of, fine-tuning custom models for specific domains. Familiarity with the concepts of, or practical experience in, deploying LLMs for offline inference.
R&D Mindset: A natural curiosity and passion for learning, experimenting with new technologies, and solving problems in novel ways.
Bonus Points (Nice-to-Haves)
Cloud Knowledge: Hands-on experience with AWS services (e.g., EC2, S3, Lambda, SageMaker).
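The RAG requirement above boils down to one core operation: embed the query, score it against stored chunk embeddings, and return the top matches to ground the model. A toy, stdlib-only sketch of that retrieval step, with hand-made two-dimensional "embeddings" standing in for real model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k chunk texts whose embeddings are most similar to the query.

    `index` is a list of (chunk_text, embedding) pairs; in a real pipeline the
    embeddings come from a model and the scan is done by a vector database.
    """
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]
```

The retrieved chunks are then prepended to the prompt so the LLM answers from the supplied context rather than from memory alone.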
Posted 20 hours ago
0.0 - 3.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Humanli.AI is a startup founded by alumni of IIM Bangalore, ISB Hyderabad, and IIM Calcutta. We are democratizing and extending technologies that were accessible to and consumed only by MNCs or Fortune companies, bringing them to SMEs and mid-size firms. We are pioneers in bringing knowledge-management algorithms and Large Language Models into a conversational bot framework.
Job Title: AI/ML Engineer
Location: Jaipur
Job Type: Full-time
Experience: 0-3 years
Job Description: We are looking for an AI/ML & Data Engineer to join our team and contribute to the development and deployment of our AI-based solutions. As an AI/ML & Data Engineer, you will be responsible for designing and implementing data models, algorithms, and pipelines for training and deploying machine learning models.
Responsibilities:
Design, develop, and fine-tune Generative AI models (e.g., LLMs, GANs, VAEs, diffusion models).
Implement Retrieval-Augmented Generation (RAG) pipelines using vector databases (FAISS, Pinecone, ChromaDB, Weaviate).
Develop and integrate AI agents for task automation, reasoning, and decision-making.
Work on fine-tuning open-source LLMs (e.g., LLaMA, Mistral, Falcon) for specific applications.
Optimize and deploy transformer-based architectures for NLP and vision-based tasks.
Train models using TensorFlow, PyTorch, and Hugging Face Transformers.
Work on prompt engineering, instruction tuning, and reinforcement learning from human feedback (RLHF).
Collaborate with data scientists and engineers to integrate models into production systems.
Stay updated with the latest advancements in Generative AI, ML, and DL.
Optimize models for performance, including quantization, pruning, and low-latency inference techniques.
Qualifications: B.Tech in Computer Science. Freshers may apply; 0-3 years of experience in data engineering and machine learning. Immediate joiners preferred.
Requirements:
Experience with data preprocessing, feature engineering, and model evaluation.
Understanding of transformers, attention mechanisms, and large-scale training.
Hands-on experience with RAG, LangChain/LangGraph, LlamaIndex, and other agent frameworks.
Understanding of prompt tuning, LoRA/QLoRA, and parameter-efficient fine-tuning (PEFT) techniques.
Strong knowledge of data modeling, data preprocessing, and feature-engineering techniques.
Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
Excellent problem-solving skills and the ability to work independently and collaboratively in a team environment.
Strong communication skills and the ability to explain technical concepts to non-technical stakeholders.
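The reason listings keep asking for LoRA/QLoRA is the parameter arithmetic: instead of training a full d×d weight update ΔW, LoRA trains two low-rank factors B (d×r) and A (r×d), so only 2·d·r parameters are trainable. A quick back-of-the-envelope sketch (dimensions chosen for illustration; real configs also pick which layers to adapt):

```python
def lora_param_counts(d_model, rank):
    """Trainable parameters: full d×d weight update vs. a rank-r LoRA delta.

    LoRA replaces the d×d update ΔW with B (d×r) @ A (r×d), so only
    2*d*r parameters are trained instead of d*d.
    """
    full = d_model * d_model
    lora = 2 * d_model * rank
    return full, lora

# One 4096-wide projection matrix at rank 8:
full, lora = lora_param_counts(d_model=4096, rank=8)
# full is ~16.8M parameters; lora is 65,536 (a >250x reduction for this layer)
```

This is why fine-tuning a multi-billion-parameter model becomes feasible on a single GPU: only the small A/B factors (and optimizer state for them) need gradient memory.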
Posted 23 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AI Engineer with Full-Stack Experience
Location: Hyderabad
Job Type: Full-Time
Role Overview
We at Arnsoft are looking for an AI Engineer from a Tier 1 college with full-stack experience (React/Java/Python) to help us design, build, and deploy cutting-edge AI systems. This role involves working at the intersection of software engineering and artificial intelligence, contributing to the development of co-pilots, AI agents, private GPTs, SLMs, and virtual advisors that solve real-world problems. You'll collaborate closely with internal teams and clients, write production-grade AI/ML code, and play a key role in delivering intelligent, scalable solutions.
You Should Have Experience With:
Building agentic systems and AI-powered applications
Working with LLMs like OpenAI, Claude, Mistral, or LLaMA
Building and fine-tuning SLMs (Small Language Models) for lightweight, focused use cases
Developing full-stack applications using React, Java, and/or Python
Cloud deployment (AWS, GCP, Azure)
Prompt engineering, RAG pipelines, and LLM fine-tuning
Tools like LangChain, LlamaIndex, or similar
Vector databases (Pinecone, Weaviate, FAISS)
API integration and backend orchestration
WebSockets and tools like VAD (voice activity detection)
Key Responsibilities
Write clean, efficient, and well-documented code for AI/ML applications.
Collaborate with cross-functional teams to understand project requirements and contribute to technical solutions.
Help build and maintain data pipelines for our AI systems.
Write clear documentation.
Stay updated with the latest advancements in AI, machine learning, and deep learning.
Work directly with clients to understand their needs, communicate progress, and maintain alignment throughout the project lifecycle.
Required Qualifications & Skills
Education: A Bachelor's or Master's degree in Computer Science, IT, or a related engineering discipline from a Tier 1 institution (must).
Academic Performance: Consistent and strong academic record.
Core Concepts: Solid understanding of fundamental AI, Machine Learning, and Deep Learning concepts.
Programming: Strong programming skills in Python.
Foundations: Excellent knowledge of Data Structures, Algorithms, and Object-Oriented Programming (OOP).
Problem-Solving: Strong analytical and problem-solving abilities with keen attention to detail.
Communication: Excellent verbal and written communication skills.
Teamwork: A collaborative mindset with the ability to work effectively in a team environment.
How To Apply
If you're excited about this role and believe you have what it takes to excel, we'd love to hear from you!
Note: People who have worked on AI projects, or who have full-stack development experience and are looking to transition into AI, are also encouraged to apply. (ref:hirist.tech)
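The "agentic systems" and "API integration and backend orchestration" items above meet in one pattern: the model emits a structured tool call, and application code routes it to a real function. A minimal dispatch sketch (the tool names and JSON shape here are illustrative, not any particular provider's function-calling schema):

```python
import json

# Tool registry: maps the names the model may emit to real callables.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch(tool_call_json):
    """Parse a model-emitted tool call like
    {"name": "add", "arguments": {"a": 2, "b": 3}} and run the matching tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])
```

Real frameworks add argument validation, error feedback to the model, and a loop that feeds tool results back into the conversation, but the registry-and-dispatch core looks like this.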
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
Strategic & Leadership-Level GenAI Skills
AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems); choosing between hosted APIs and open-source models; architecting hybrid systems (LLMs + traditional software).
Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA); understanding trade-offs across latency, cost, accuracy, and context length; using tools like LM Evaluation Harness, the OpenLLM Leaderboard, etc.
Responsibilities
Strategic & Leadership-Level GenAI Skills
AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems); choosing between hosted APIs and open-source models; architecting hybrid systems (LLMs + traditional software).
Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA); understanding trade-offs across latency, cost, accuracy, and context length; using tools like LM Evaluation Harness, the OpenLLM Leaderboard, etc.
Enterprise-Grade RAG Systems: Designing Retrieval-Augmented Generation pipelines; using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex; optimizing chunking, embedding strategies, and retrieval quality.
Security, Privacy & Governance: Implementing data privacy, access control, and audit logging; understanding risks such as prompt injection, data leakage, and model misuse; aligning with frameworks like the NIST AI RMF, the EU AI Act, or ISO/IEC 42001.
Cost Optimization & Monitoring: Estimating and managing GenAI inference costs; using observability tools (e.g., Arize, WhyLabs, PromptLayer); tracking token usage and optimizing prompts.
Advanced Technical Skills
Model Fine-Tuning & Distillation: Fine-tuning open-source models using PEFT, LoRA, and QLoRA; knowledge distillation for smaller, faster models; using tools like Hugging Face, Axolotl, or DeepSpeed.
Multi-Agent Systems: Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph); task decomposition, memory, and tool orchestration.
Toolformer & Function Calling: Integrating LLMs with external tools, APIs, and databases.
Designing tool-use schemas and managing tool routing.
Team & Product Leadership
GenAI Product Thinking: Identifying use cases with high ROI; balancing feasibility, desirability, and viability; leading GenAI PoCs and MVPs.
Mentoring & Upskilling Teams: Training developers on prompt engineering, LangChain, etc.; establishing GenAI best practices and code reviews; leading internal hackathons or innovation sprints.
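The Cost Optimization & Monitoring skill above starts from simple arithmetic: hosted LLMs bill prompt and completion tokens at separate per-1k rates, so per-request cost is a weighted sum. A sketch with placeholder prices (real rates vary by provider and model; always take them from the vendor's current pricing page):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_1k, completion_price_per_1k):
    """Estimate one request's inference cost from token counts.

    Prices are illustrative placeholders, expressed per 1,000 tokens.
    """
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# e.g. 1,200 prompt tokens + 300 completion tokens at $0.01 / $0.03 per 1k
cost = estimate_cost(1200, 300, 0.01, 0.03)
```

Token-usage tracking is then just aggregating these per-request numbers by team, feature, or prompt template, which is what tools like PromptLayer surface in dashboards.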
Posted 1 day ago
6.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Experience: 6-8 years
Location: Remote
Domain: Creator Economy, AI Video Tools
🔧 Responsibilities:
Develop, test, and deploy LLM-driven agents to automate content strategy, scriptwriting, and moderation
Implement agentic workflows using frameworks like LangChain, LangGraph, or CrewAI
Design real-time script analyzers to flag and intercept deepfake/misuse risks
Integrate voice cloning, avatar-generation APIs, and safety intercept logic
Build structured conversation memory, action plans, and tool-calling agents for video planning
Collaborate with the backend team to expose agent actions via API
✅ Requirements:
Strong Python skills with experience in AI orchestration frameworks
Hands-on with OpenAI, Anthropic, Llama, or Mistral APIs
Experience with vector DBs (e.g., FAISS, Weaviate) for content memory
Deep understanding of prompt engineering, function calling, RAG, and agent autonomy
Awareness of deepfake, safety, and fairness risks in generative AI
Good to have: Experience with Hugging Face, LangGraph, or Guardrails AI
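The "structured conversation memory" responsibility above typically means keeping recent turns within the model's context budget and evicting the oldest ones. A toy sketch; word-count here is a crude stand-in for a real tokenizer, and the class name is illustrative:

```python
class ConversationMemory:
    """Keep the most recent (role, content) turns under a rough token budget.

    Token counting is whitespace word count, a stand-in for a real tokenizer;
    production systems often also summarize evicted turns instead of dropping them.
    """
    def __init__(self, max_tokens=50):
        self.max_tokens = max_tokens
        self.turns = []

    def add(self, role, content):
        self.turns.append((role, content))
        self._trim()

    def _trim(self):
        # Drop the oldest turns until the remaining ones fit the budget.
        while self.turns and self._total() > self.max_tokens:
            self.turns.pop(0)

    def _total(self):
        return sum(len(content.split()) for _, content in self.turns)
```

On each model call the agent serializes `turns` into the prompt, so the window slides forward as the conversation grows.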
Posted 1 day ago
2.0 - 3.0 years
2 - 6 Lacs
Hyderābād
Remote
We are looking for a highly motivated and skilled Generative AI (GenAI) Developer to join our dynamic team. You will be responsible for building and deploying GenAI solutions using large language models (LLMs) to address real-world business challenges. The role involves working with cross-functional teams, applying prompt engineering and fine-tuning techniques, and building scalable AI-driven applications. A strong foundation in machine learning and NLP, and a passion for emerging GenAI technologies, are essential.
Responsibilities
Design, develop, and implement GenAI solutions in Python using large language models (LLMs) to address specific business needs.
Collaborate with stakeholders to identify opportunities for GenAI integration and translate requirements into scalable solutions.
Preprocess and analyze unstructured data (text, documents, etc.) for model training, fine-tuning, and evaluation.
Apply prompt engineering, fine-tuning, and RAG (Retrieval-Augmented Generation) techniques to optimize LLM outputs.
Deploy GenAI models and APIs into production environments, ensuring performance, scalability, and reliability.
Monitor and maintain deployed solutions, incorporating improvements based on feedback and real-world usage.
Stay up to date with the latest advancements in GenAI, LLMs, and orchestration tools (e.g., LangChain, LlamaIndex).
Write clean, maintainable, and well-documented code, and contribute to team-wide code reviews and best practices.
Requirements
2-3 years of relevant, proven experience as an AI Developer.
Proficiency in Python.
Good understanding of multiple GenAI models (OpenAI, LLaMA 2, Mistral) and the ability to set up local GPTs using Ollama, LM Studio, etc.
Experience with LLMs, RAG (Retrieval-Augmented Generation), and vector databases (e.g., FAISS, Pinecone).
Experience with multi-agent frameworks for creating workflows: LangChain or similar tools like LlamaIndex, LangGraph, etc.
Knowledge of Machine Learning frameworks, libraries, and tools.
Excellent problem-solving skills and a solution mindset.
Strong communication and teamwork skills.
Ability to work independently and manage one's time effectively.
Experience with any of the cloud platforms (AWS, GCP, Azure).
Benefits
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centres.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, a stress management program, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, a periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses.
Disclaimer: Accellor is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other applicable legally protected characteristic.
Posted 1 day ago
0 years
3 Lacs
Calcutta
Remote
What You'll Do
Build AI/ML technology stacks from concept to production, including data pipelines, model training, and deployment.
Develop and optimize Generative AI workflows, including prompt engineering, fine-tuning (LoRA, QLoRA), retrieval-augmented generation (RAG), and LLM-based applications.
Work with Large Language Models (LLMs) such as Llama, Mistral, and GPT, ensuring efficient adaptation for various use cases.
Design and implement AI-driven automation using agentic AI systems and orchestration frameworks like AutoGen, LangGraph, and CrewAI.
Leverage cloud AI infrastructure (AWS, Azure, GCP) for scalable deployment and performance tuning.
Collaborate with cross-functional teams to deliver AI-driven solutions.
Job Types: Part-time, Contractual / Temporary
Contract length: 2 months
Pay: From ₹25,000.00 per month
Expected hours: 40 per week
Schedule: Day shift
Work Location: Remote
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 4 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.
Key Responsibilities:
Develop and implement machine learning models and algorithms.
Work closely with project stakeholders to understand requirements and translate them into deliverables.
Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
Stay updated with the latest advancements in AI/ML technologies and methodologies.
Collaborate with cross-functional teams to support various AI/ML initiatives.
Qualifications:
Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
Strong understanding of machine learning, deep learning, and Generative AI concepts.
Preferred Skills:
Experience with machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP, using Python.
Strong knowledge and experience in Generative AI / LLM-based development.
Strong experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
Experience with cloud infrastructure for AI / Generative AI / ML on AWS and Azure.
Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.); expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
Proficiency in Python, TypeScript, Node.js, and ReactJS (or equivalent) and associated frameworks and libraries (e.g., pandas, NumPy, scikit-learn), plus AWS Glue crawlers and ETL.
Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
Experience with version control systems (e.g., Git, CodeCommit).
Good-to-Have Skills
Knowledge and experience in building knowledge graphs in production.
Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
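The "effective text chunking" skill mentioned above usually means fixed-size windows with overlap, so context isn't lost at chunk boundaries before indexing. A stdlib-only sketch; the sizes are tiny for illustration (real pipelines use hundreds of tokens per chunk, counted with a real tokenizer, and assume overlap < size):

```python
def chunk(text, size=5, overlap=2):
    """Split text into word-based chunks, overlapping neighbours by `overlap` words.

    Overlap repeats the tail of each chunk at the head of the next, which tends
    to improve retrieval when a relevant sentence straddles a boundary.
    Assumes 0 <= overlap < size.
    """
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # last window already covers the end of the text
    return chunks
```

Each chunk is then embedded and written to the vector store along with its source metadata.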
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Bizoforce is actively hiring an experienced AI/ML Engineer to join our innovative team and work on cutting-edge solutions in Generative AI, LLMs, and multi-agent architectures. This is a fully remote role based in India, ideal for professionals passionate about AI innovation and real-world deployment. You'll be contributing to advanced applications such as LLM-based tutoring systems, OCR-powered tools, AI content generators, and data-driven assistants in the EdTech, enterprise, and healthcare domains. A clinical background is a plus but not required.
Key Responsibilities:
* Design, develop, and deploy scalable LLM-based systems, RAG pipelines, and Generative AI applications
* Engineer structured prompts for reliable outputs using zero-shot, CoT, few-shot, and meta-prompting methods
* Build and integrate multi-modal AI models (text, vision, OCR, etc.) using frameworks like CrewAI and LangGraph
* Develop and manage backend systems using Python (FastAPI/Flask), with Redis, PostgreSQL, and MongoDB
* Collaborate with cross-functional teams across the full ML lifecycle (data, model, API, deployment, optimization)
* Use tools like Docker and GitHub Actions (CI/CD), and deploy on AWS, Azure, or GCP environments
* Participate in code reviews, testing, and MLOps workflows to ensure high-quality, scalable output
Required Skills
* 5+ years of experience in AI/ML engineering with strong backend development skills
* Expert-level Python programming and experience with FastAPI or Flask
* Hands-on experience with LLMs (GPT-4, LLaMA, Claude, Phi-3), Generative AI, and prompt engineering
* Familiarity with frameworks such as LangChain, LangGraph, CrewAI, and Smol Agents
* Strong understanding of RAG (Retrieval-Augmented Generation) and multi-agent AI systems
* Experience in Computer Vision (OpenCV, YOLOv8) and OCR (docTR, TrOCR, Mistral OCR)
* Vector DBs: Milvus, FAISS, RedisVector; Databases: MongoDB, PostgreSQL
* Cloud: AWS (S3, EC2, SageMaker), Azure, GCP; Tools: Docker, Git, GitHub Actions
* Bonus: Experience in EdTech or clinical AI solutions (not mandatory)
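The "structured prompts … using zero-shot, CoT, few-shot" responsibility above is mostly prompt assembly: an instruction, some worked examples, then the new input. A minimal few-shot template builder (the layout is one common convention, not a standard):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query.

    `examples` is a list of (input, output) pairs; the trailing bare "Output:"
    invites the model to complete the pattern for `query`.
    """
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)
```

Zero-shot is the same template with an empty `examples` list; chain-of-thought variants put a reasoning trace inside each example's output.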
Posted 1 day ago
8.0 years
0 Lacs
Chandigarh, India
On-site
Job Description:
5-8 years of backend or full-stack development experience, with at least 3 years in Generative and Agentic AI.
Strong expertise in building APIs using REST, GraphQL, and gRPC, with a focus on performance, versioning, and security.
Proficiency in Python (preferred), with additional experience in TypeScript/Node.js, Go, or Java.
In-depth knowledge of LLM integration and orchestration (OpenAI, Claude, Gemini, Mistral, LLaMA, etc.).
Strong experience with frameworks such as LangChain, LlamaIndex, CrewAI, and AutoGen.
Familiarity with vector search, semantic memory, and retrieval-based augmentation using tools like FAISS or Qdrant.
Solid understanding of cloud infrastructure (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes).
Posted 1 day ago
2.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We are looking for a highly motivated and skilled Generative AI (GenAI) Developer to join our dynamic team. You will be responsible for building and deploying GenAI solutions using large language models (LLMs) to address real-world business challenges. The role involves working with cross-functional teams, applying prompt engineering and fine-tuning techniques, and building scalable AI-driven applications. A strong foundation in machine learning and NLP, and a passion for emerging GenAI technologies, are essential.
Responsibilities
Design, develop, and implement GenAI solutions in Python using large language models (LLMs) to address specific business needs.
Collaborate with stakeholders to identify opportunities for GenAI integration and translate requirements into scalable solutions.
Preprocess and analyze unstructured data (text, documents, etc.) for model training, fine-tuning, and evaluation.
Apply prompt engineering, fine-tuning, and RAG (Retrieval-Augmented Generation) techniques to optimize LLM outputs.
Deploy GenAI models and APIs into production environments, ensuring performance, scalability, and reliability.
Monitor and maintain deployed solutions, incorporating improvements based on feedback and real-world usage.
Stay up to date with the latest advancements in GenAI, LLMs, and orchestration tools (e.g., LangChain, LlamaIndex).
Write clean, maintainable, and well-documented code, and contribute to team-wide code reviews and best practices.
Requirements
2-3 years of relevant, proven experience as an AI Developer.
Proficiency in Python.
Good understanding of multiple GenAI models (OpenAI, LLaMA 2, Mistral) and the ability to set up local GPTs using Ollama, LM Studio, etc.
Experience with LLMs, RAG (Retrieval-Augmented Generation), and vector databases (e.g., FAISS, Pinecone).
Experience with multi-agent frameworks for creating workflows: LangChain or similar tools like LlamaIndex, LangGraph, etc.
Knowledge of Machine Learning frameworks, libraries, and tools.
Excellent problem-solving skills and a solution mindset.
Strong communication and teamwork skills.
Ability to work independently and manage one's time effectively.
Experience with any of the cloud platforms (AWS, GCP, Azure).
Benefits
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centres.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, a stress management program, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, a periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses.
Disclaimer: Accellor is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other applicable legally protected characteristic.
Posted 2 days ago
0.0 - 1.0 years
0 Lacs
India
Remote
Prime is a cutting-edge EdTech startup focused on building intelligent, autonomous AI agents that collaborate in multi-agent systems. We create agent-based architectures that enable autonomous decision-making and seamless cooperation to solve complex problems. Join us to help pioneer the future of decentralized AI! We are a fast-growing EdTech company driven by innovation, collaboration, and adaptability. Our mission is to deliver cutting-edge solutions that align with market demands and technical feasibility.
Role Overview
As a Multi-Agent Systems Architect at Prime Corporate, you will design and develop multi-agent architectures that empower AI agents to work together autonomously. You will be responsible for creating scalable, robust systems that enable agents to communicate, negotiate, and collaborate effectively, driving innovation in AI-driven automation.
Key Responsibilities
• Design and implement multi-agent system architectures that enable autonomous decision-making and collaboration among AI agents.
• Develop agent-based frameworks that support task allocation, communication protocols, and coordination strategies.
• Build and optimize agent communication layers using APIs, vector databases, and messaging protocols.
• Integrate large language models (LLMs) and other AI components into agent workflows to enhance capabilities.
• Work directly with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, etc.).
• Collaborate closely with product, engineering, and research teams to translate business requirements into technical solutions.
• Ensure scalability, reliability, and fault tolerance of multi-agent systems in production environments.
• Continuously research and apply the latest advances in multi-agent systems, decentralized AI, and autonomous agents.
• Document architecture designs, workflows, and implementation details clearly for team collaboration and future reference.
What We’re Looking For:
• Practical experience designing and building multi-agent systems or agent-based architectures.
• Proficiency in Python and familiarity with AI/ML frameworks (e.g., LangChain, AutoGen, Hugging Face).
• Understanding of decentralized control, agent communication protocols, and emergent system design.
• Experience with cloud platforms (AWS, GCP, Azure) and API integrations.
• Strong problem-solving skills and the ability to work independently in a remote startup environment.
• No formal degree required; your skills, projects, and passion matter most.
Location: 100% Onsite
Experience: 0-1 year
Compensation Structure
This role follows a structured pathway designed to prepare candidates for the responsibilities of a full-time position.
• Pre-Qualification Internship (Mandatory): Duration: 2 months; Stipend: ₹5,000/month; Objective: to evaluate foundational skills, work ethic, and cultural fit within the organization.
• Internship (Mandatory): Duration: 4 months; Stipend: ₹5,000-₹15,000/month (based on performance during the pre-qualification internship).
Why Join Prime Corporate?
• Work remotely with a passionate, innovative startup.
• Contribute to pioneering multi-agent AI systems shaping the future of autonomous technology.
• Grow your career from internship to full-time with competitive pay and equity opportunities.
• Career Growth: Prove your potential and secure a full-time role with competitive compensation.
Note: This is not a direct full-time job opportunity. Candidates must commit to our mandatory two-stage internship process. If you're genuinely interested in joining us, we'd love to hear from you! Ready to build the future of autonomous AI? Apply now and join Prime Corporate's mission!
Industry: Software Development
Employment Type: Internship
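At its simplest, the multi-agent coordination this role describes is message passing: each agent receives a task message, applies its own capability, and hands its result to the next agent. A toy pipeline sketch (class and field names are illustrative; frameworks like AutoGen add LLM-backed agents, negotiation, and shared memory on top of this pattern):

```python
class Agent:
    """Toy agent: receives a task message, applies its skill, replies."""
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # a callable representing this agent's capability

    def handle(self, message):
        return {"from": self.name, "result": self.skill(message["task"])}

def run_pipeline(agents, task):
    """Route one task through a chain of agents, feeding each result forward."""
    message = {"task": task}
    for agent in agents:
        reply = agent.handle(message)
        message = {"task": reply["result"]}
    return message["task"]
```

Replacing the lambdas with LLM calls (plan, draft, critique) turns this same loop into a basic planner/executor workflow.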
Posted 2 days ago
19.0 years
0 Lacs
India
On-site
About Us
Fintricity, our core consulting brand, has been at the forefront of technology and digital transformation for over 19 years, most recently with a focus on bringing innovation to leading brands through expertise in big data, analytics, and technologies (such as Generative AI) that are transforming operations, products, services, business models, and sectors. Pivoting to an AI services and venture firm, Fintricity is working on a range of exciting new projects and ventures to build new decentralised business models and technology platforms and to disrupt and transform multiple industries.
PLEASE DO NOT SEND A CONNECT ON LINKEDIN OR YOUR APPLICATION WILL BE IMMEDIATELY REJECTED.
TAKE THE NEXT STEP: Are you truly collaborative? Succeeding at Fintricity means respecting, understanding, and trusting colleagues and clients; challenging others and being challenged in return; being a strong communicator, passionate, entrepreneurial, and innovative about what you do; driving yourself forward, always wanting to do things the right way. Does that sound like you? Together: that's how we do things. We offer a supportive, challenging, and diverse working environment. We value your passion and commitment, and reward your performance. Keen to achieve the work-life agility that you desire? We're open to discussing how this could work for you (and us).
About the Role
We are looking for a Senior Python Engineer (Full Stack) to design and build the next generation of AgenticOps platforms, enabling scalable, observable, and safe deployment of autonomous AI agents and LLM workflows. This is a hands-on engineering role with broad responsibilities, from building backend services that orchestrate LLM pipelines to designing web UIs that monitor, audit, and control agentic behavior. You'll work at the intersection of Generative AI, DevOps, and full-stack development, helping to productize and operationalize intelligent agents across cloud environments.
Responsibilities
Backend Engineering (Python): Build modular services for prompt orchestration, vector DB interactions, agent memory/state stores, and retrieval pipelines. Create scalable APIs for agent task execution, human-in-the-loop control, and logging.
AgenticOps & LLM Integration: Develop tools to version, deploy, monitor, and roll back LLM agents and RAG-based workflows. Work with cloud-based LLM APIs (OpenAI, Claude, Gemini) and open-source models (LLaMA, Mistral, etc.) via LangChain or similar.
DevOps & Infrastructure-as-Code: Automate deployment pipelines (Docker, K8s, Terraform), observability stacks (Prometheus, Grafana), and secure API gateways. Own CI/CD and testing for both code and LLM-based behavior.
Frontend Development (React, Next.js or Vue preferred): Build intuitive UIs for monitoring agents in real time, visualizing traces, and showing decision logs, version history, and alerts. Enable role-based access, interactive prompt testing, and behavior auditing.
Data & Observability: Create feedback collection pipelines, behavior traces, token usage dashboards, and incident response views. Integrate OpenTelemetry, custom metrics, and structured logging.
Collaboration & Leadership: Work cross-functionally with AI scientists, DevOps engineers, and product managers. Provide mentorship and guide best practices in full-stack design and Pythonic architecture.
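The lifecycle operations this role mentions (spawn, pause, escalate, terminate) can be pictured as a small state machine. The sketch below is illustrative only: the state names, allowed transitions, and `Agent` class are assumptions for this example, not the platform's actual design.

```python
from enum import Enum

class AgentState(Enum):
    SPAWNED = "spawned"
    RUNNING = "running"
    PAUSED = "paused"
    ESCALATED = "escalated"
    TERMINATED = "terminated"

# Which transitions are legal is an illustrative assumption, not a product spec.
TRANSITIONS = {
    AgentState.SPAWNED: {AgentState.RUNNING, AgentState.TERMINATED},
    AgentState.RUNNING: {AgentState.PAUSED, AgentState.ESCALATED, AgentState.TERMINATED},
    AgentState.PAUSED: {AgentState.RUNNING, AgentState.TERMINATED},
    AgentState.ESCALATED: {AgentState.RUNNING, AgentState.TERMINATED},
    AgentState.TERMINATED: set(),
}

class Agent:
    """Tracks one agent's lifecycle and rejects illegal transitions."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.state = AgentState.SPAWNED
        self.history = [AgentState.SPAWNED]

    def transition(self, target: AgentState) -> None:
        if target not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state.value} -> {target.value}")
        self.state = target
        self.history.append(target)

agent = Agent("summarizer-01")
for step in (AgentState.RUNNING, AgentState.PAUSED,
             AgentState.RUNNING, AgentState.TERMINATED):
    agent.transition(step)
```

Keeping the transition table explicit makes the runtime auditable: the `history` list is exactly the kind of trace a monitoring UI for agents would visualize.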
Requirements
5+ years in Python-based full-stack or backend development
Solid experience with modern web frameworks (e.g., FastAPI, Django, Flask)
Proficiency in frontend frameworks like React, Vue, or Svelte
Strong DevOps/infra skills: Docker, Kubernetes, Terraform, GitHub Actions or similar
Experience with LLM APIs and frameworks (LangChain, LlamaIndex, AutoGen, CrewAI)
Familiarity with vector DBs (FAISS, Weaviate, Pinecone) and retrieval-based architectures
Experience building monitoring/observability tooling (Grafana, Prometheus, ELK)
Understanding of secure API design, authN/authZ, and agent sandboxing best practices
Nice to Haves
Experience building agent frameworks or workflow orchestration UIs
Familiarity with LLMOps and Responsible AI patterns (prompt auditing, guardrails, evals)
Contributions to open-source infra or AI tooling
Experience with serverless (e.g., AWS Lambda, Google Cloud Functions)
What You'll Get
Opportunity to shape the agent infrastructure stack of the future
High-impact role in a cross-disciplinary team at the cutting edge of LLMs and automation
Access to GPU/LLM infra and modern development tools
Competitive compensation, flexible work, and continuous learning budget
Posted 2 days ago
10.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We're seeking a visionary Enterprise Architect to join our CTO Office and shape cross-portfolio solutions at the intersection of AI, Customer Experience (CX), Cybersecurity, and Digital Skilling technologies. You'll architect scalable, standardized solutions for global clients, govern complex deals, and collaborate with diverse stakeholders to translate business needs into future-ready technical strategies. As a trusted advisor, you will evangelize solution value, articulating how the right technology mix enables our customers to achieve strategic outcomes.
At TeKnowledge, your work makes an impact from day one. We partner with organizations to deliver AI-First Expert Technology Services that drive meaningful impact in AI, Customer Experience, and Cybersecurity. We turn complexity into clarity and potential into progress, in a place where people lead and tech empowers. You'll be part of a diverse and inclusive team where trust, teamwork, and shared success fuel everything we do. We push boundaries, using advanced technologies to solve complex challenges for clients around the world. Here, your work drives real change, and your ideas help shape the future of technology. We invest in you with top-tier training, mentorship, and career development, ensuring you stay ahead in an ever-evolving world.
Why You'll Enjoy It Here:
Be Part of Something Big: A growing company where your contributions matter.
Make an Immediate Impact: Support groundbreaking technologies with real-world results.
Work on Cutting-Edge Tech: AI, cybersecurity, and next-gen digital solutions.
Thrive in an Inclusive Team: A culture built on trust, collaboration, and respect.
We Care: Integrity, empathy, and purpose guide every decision.
We're looking for innovators, problem-solvers, and experts ready to drive change and grow with us. We Are TeKnowledge. Where People Lead and Tech Empowers.
Responsibilities:
Design enterprise-grade architectures integrating structured/unstructured data, analytics, and advanced AI models (GenAI, LLMs, cognitive services).
Build scalable data pipelines and lake-centric architectures to power real-time analytics and machine learning.
Architect multi-cloud AI/ML platforms using Azure, including deployment of LLMs (Azure OpenAI and open-source models like LLaMA, Mistral, Falcon).
Define infrastructure, data, and app requirements to deploy LLMs in customer private data centers.
Lead technical reviews for high-value deals, identifying risks and mitigation strategies.
Design integrated solutions across AI, CX, Cybersecurity, and Tech Managed Services portfolios.
Develop standard design patterns and reusable blueprints for repeatable, low-risk, and scalable solution delivery.
Present architectural solutions to C-suite executives, aligning technical outcomes with business value and ROI.
Collaborate with sales and pre-sales to scope complex opportunities and develop compelling proposals.
Foster innovation across CTO, Sales, and Solution teams.
Identify synergy across offerings (e.g., Microsoft Copilot + AI-first CX + Cybersecurity).
Support product teams with market feedback and solution evolution.
Define architectural best practices ensuring security, compliance, and scalability.
Mentor delivery teams on frameworks and emerging tech adoption.
Shape and execute the enterprise architecture strategy aligned with business goals.
Champion digital transformation and technology innovation.
Leverage expertise in Azure and Microsoft D365 to support solution architecture.
Drive responsible AI adoption and ensure awareness of privacy, bias, and security in deployments.
Ensure all solutions meet IT security and compliance standards.
Collaborate with Legal and Procurement for contract negotiations and vendor performance.
Lead, mentor, and build a high-performing, collaborative CTO team with a customer-first mindset.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or related field.
Experience: 10+ years in enterprise architecture, with 5+ years in customer-facing roles.
Certifications: Preferred TOGAF, Zachman, ITIL, CISSP, Azure certifications or equivalents.
Proven experience architecting and delivering AI/ML platforms, data lakes, and intelligent applications at enterprise scale.
Demonstrable experience deploying local LLMs in production environments, including integration with LangChain, databases, and private storage.
Strong knowledge of enterprise architecture frameworks and multi-cloud platforms (with a focus on Azure).
Ability to design and deliver end-to-end solutions including networks (voice and data), microservices, business applications, resilience, disaster recovery, and security.
Understanding of On-Prem / Private Cloud workload migration to public or hybrid cloud environments.
Commercial acumen with the ability to articulate the business value of cloud-based solutions to executive stakeholders.
Strong problem-solving and critical thinking skills with a proactive, outcome-oriented mindset.
Experience with cloud computing, data center technologies, virtualization, and enterprise-grade security policies/processes.
Proficiency in AI/ML, cybersecurity frameworks, customer experience platforms, and Microsoft Cloud (Azure, M365, D365).
Exceptional communication and storytelling abilities for both technical and non-technical audiences.
Experience engaging with large enterprise clients across industries such as government, healthcare, banking & finance, travel, and manufacturing.
Empowering Leadership and Innovation
At TeKnowledge, we are committed to fostering a culture of inspiring leadership and innovation. Our core leadership competencies are integral to our success:
Inspire: We prioritize creating an inclusive environment, leading with purpose, and acting with integrity and respect.
Build: Our leaders own business growth, drive innovation, and continuously strive for excellence.
Deliver: We focus on setting clear priorities, embracing agility and change, and fostering collaboration for growth.
We are looking for talented individuals who embody these competencies, are ready to grow, and are eager to contribute to our dynamic team. If you are passionate about making a meaningful impact and excel in a collaborative, forward-thinking environment, we invite you to apply and help us shape the future.
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Location: Remote Employment Type: Full-time About the Role We are looking for a Senior Machine Learning Engineer to lead the development and deployment of AI/ML models for our platforms. In this role, you will drive technical strategy, design and deploy intelligent systems, mentor junior engineers, and collaborate with cross-functional teams to deliver scalable, production-grade ML solutions. Key Responsibilities Independently design, build, and deploy machine learning models for core use cases. Drive the end-to-end lifecycle of ML projects—from scoping and architecture to implementation, deployment, and performance tuning. Maintain a hands-on approach in all aspects of development—from data preprocessing and feature engineering to model training, evaluation, and optimization. Lead technical reviews, provide constructive feedback, and help grow the team's skill sets through coaching and knowledge sharing. Provide technical leadership and mentorship to junior engineers and data scientists, fostering a collaborative and high-performing team culture. Drive ML initiatives from ideation through production, ensuring scalability, performance, and maintainability. Collaborate with cross-functional teams including product, engineering, and operations to integrate intelligent solutions into user-facing products. Establish and promote ML best practices, including reproducibility, version control, testing, MLOps, and data governance. Oversee and guide the creation of scalable and maintainable ML pipelines and infrastructures. Stay ahead of industry trends and guide the adoption of new tools and techniques where relevant. Evaluate and integrate cutting-edge tools, frameworks, and techniques in NLP, deep learning, and computer vision. Own the quality, fairness, and compliance of ML systems, especially in sensitive use cases like content filtering and moderation.
Design and implement machine learning models for automated content moderation, including toxicity, hate speech, spam, and NSFW detection. Build and optimize personalized recommendation systems using collaborative filtering, content-based, and hybrid approaches. Develop and maintain embedding-based similarity search for recommending relevant content based on user behavior and content metadata. Fine-tune and apply LLMs for moderation and summarization, leveraging prompt engineering or adapter-based methods. Deploy real-time inference pipelines for immediate content filtering and user-personalized suggestions. Ensure content moderation models are explainable, auditable, and bias-mitigated to align with ethical AI practices. Hands-on experience in content recommendation systems (e.g., collaborative filtering, ranking models, embeddings). Experience with content moderation frameworks, such as Perspective API, OpenAI moderation endpoints, or custom NLP classifiers. Strong knowledge of transformer-based models for NLP, including experience with Hugging Face, BERT, RoBERTa, etc. Practical experience with LLMs (GPT, Claude, Mistral) and tools for LLM fine-tuning or prompt engineering for moderation tasks. Familiarity with vector databases (e.g., FAISS, Pinecone) for similarity search in recommendation systems. Deep understanding of model fairness, debiasing techniques, and AI safety in content moderation. Required Skills & Qualifications Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field. 5+ years of hands-on experience in machine learning, NLP, or deep learning, with a track record of leading projects. Expertise in Python and machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, Hugging Face, etc. Strong background in recommendation systems, content moderation, or ranking algorithms. Experience with cloud platforms (AWS/GCP/Azure), distributed computing (Spark), and MLOps tools.
Proven ability to lead complex ML projects and teams, delivering business value through intelligent systems. Excellent communication skills, with the ability to explain complex ML concepts to stakeholders. Experience with LLMs (GPT, Claude, Mistral) and fine-tuning for domain-specific tasks. Knowledge of reinforcement learning, graph ML, or multimodal systems. Previous experience building AI systems for content moderation, personalization, or recommendation in a high-scale platform. Strong awareness of ethical AI principles, fairness, bias mitigation, and responsible data usage. Contributions to open-source ML projects or published research.
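The embedding-based similarity search this role describes reduces, at its core, to cosine similarity over vectors. A minimal sketch follows; the catalog vectors and item semantics are invented toy values, and a real system would store embeddings in a vector DB such as FAISS or Pinecone rather than a NumPy array:

```python
import numpy as np

def top_k_similar(query: np.ndarray, catalog: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k catalog rows most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    scores = c @ q  # cosine similarity of each normalized row against the query
    return [int(i) for i in np.argsort(-scores)[:k]]

# Toy 4-dimensional "content embeddings" (illustrative values only).
catalog = np.array([
    [1.0, 0.0, 0.0, 0.0],   # item 0
    [0.9, 0.1, 0.0, 0.0],   # item 1, nearly identical to item 0
    [0.0, 0.0, 1.0, 0.0],   # item 2, orthogonal to the others
])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(top_k_similar(query, catalog))  # item 0 ranks first, then item 1
```

Normalizing both sides first means the dot product is exactly cosine similarity, which is the usual metric for comparing text or content embeddings.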
Posted 2 days ago
4.0 - 6.0 years
10 - 18 Lacs
Chennai, Tamil Nadu, India
On-site
JD for Generative AI Specialist (Junior / Senior)
Years of experience: 4-6 years (Junior) or 6-8 years (Senior)
Shift: 11AM - 8PM
Location: Chennai
Mode: Work From Office
Role: The Generative AI Specialist will build GenAI LLM-driven solutions using state-of-the-art models (OpenAI, Gemini, Claude) and open-source models (LLaMA, Mistral). Should have expertise in fine-tuning and training models, and should have implemented projects with expertise in Agents, Tools and RAG solutions. Hands-on expertise in integrating LLMs with vector DBs such as ChromaDB, FAISS and Pinecone is required. Expertise in PEFT and quantization of models is required. Experience in TensorFlow, PyTorch, Python, Hugging Face and Transformers is a must. Expertise in data preparation and analysis, and hands-on expertise in deep learning model development, is preferred. Additional expertise in deploying models in AWS is desired but optional.
Skills: OpenAI, Gemini, LangChain, Transformers, Hugging Face, Python, PyTorch, TensorFlow, VectorDB (ChromaDB, FAISS, Pinecone)
Project experience: At least 1-2 live implementations of Generative AI-driven solutions. Extensive experience in implementing chatbots, knowledge search and NLP. Good expertise in implementing machine learning and deep learning solutions for at least 2 years.
4th Floor, Techno Park, 10, Rajiv Gandhi Salai, Customs Colony, Sakthi Nagar, Thoraipakkam, Chennai 600097
Skills: rag,nlp,aws,vectordb (chromadb, faiss, pinecone),claude,agents,tensorflow,langchain,chatbots,hugging face,analysis,transformers,python,chromadb,gemini,openai,deep learning,faiss,llms,opensource models,pytorch,genai llm,peft,machine learning,llama,vectordb,mistral,pinecone
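The quantization expertise this posting asks about boils down to mapping floating-point weights to low-precision integers plus a scale. The sketch below shows symmetric per-tensor int8 quantization on a toy weight array; it is a didactic simplification, and real model-quantization toolchains (e.g., bitsandbytes or GPTQ-style methods) are considerably more sophisticated:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Toy weight tensor (illustrative values only, not real model weights).
w = np.array([-1.27, 0.0, 0.5, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.max(np.abs(w - w_hat)))
```

The storage cost drops 4x versus float32, at the price of the rounding error captured in `max_err`; per-channel scales and calibration data are what production schemes add on top.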
Posted 2 days ago
2.0 - 3.0 years
6 - 10 Lacs
Gurgaon
On-site
We're Hiring: Machine Learning Engineer Location: Gurugram (Hybrid) Industry: Mobile Gaming | Real Money Gaming | AI-Powered Entertainment Experience: 2–3 Years Join Games Pro India Pvt. Ltd. and help build cutting-edge AI systems and LLM-native infrastructure for the future of gaming. Key Responsibilities: Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI. Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for real-time gaming applications. Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback. Collaborate cross-functionally with data scientists, game developers, product teams, and curators to build impactful AI solutions. Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods. Qualifications: 2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction. Strong Python programming skills and experience building production-ready codebases. Hands-on experience with LLM frameworks/tooling such as LangChain, Hugging Face, OpenAI APIs, Transformers. Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices. Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow. Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment. Interested candidates should share their resume at shruti.sharma@gamespro.in; contact no: 8076 310 357 #Hiring #MachineLearning #LLM #AIinGaming #LangChain #HuggingFace #GamingJobs #MLJobs #GamesProIndia #TechCareers #GurugramJobs #DeepLearning #PythonJobs Job Type: Full-time Pay: ₹600,000.00 - ₹1,000,000.00 per year Schedule: Day shift Work Location: In person
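An evaluation framework for the entity-extraction pipelines this role describes usually starts with span-level precision, recall, and F1 against a gold set. A minimal sketch, with invented entity tuples purely for illustration:

```python
def precision_recall_f1(predicted: set, gold: set):
    """Span-level precision, recall, and F1 for one extraction run."""
    tp = len(predicted & gold)  # exact (span, label) matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy gold annotations and model output as (text, label) pairs (illustrative only).
gold = {("LLaMA", "MODEL"), ("Mistral", "MODEL"), ("LangChain", "FRAMEWORK")}
predicted = {("LLaMA", "MODEL"), ("LangChain", "FRAMEWORK"), ("Gurugram", "MODEL")}
p, r, f1 = precision_recall_f1(predicted, gold)
```

Tracking these numbers per entity type over time is one simple way to catch regressions (and hallucinated entities, which show up as precision drops) across prompt or model changes.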
Posted 2 days ago
0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
🚀 AI-FULL STACK INTERN → FUTURE TECH LEAD (MERN / PYTHON + DEVOPS) Location: Lucknow (Onsite) | Duration: 6 Months → Full-Time “For rebels who fine-tune Llama 3 before breakfast and argue about Kubernetes over chai. If deploying open-source models on Hetzner at 2 AM excites you—this is your battleground.” 💻 Your War Mission Build AI-powered business weapons that redefine industries: ⚔️ Deploy open-source giants : Llama 3, Mistral, Phi-3 — optimize for consultative salesbots, customer assistants, and predictive engines. ⚔️ Architect at scale : Melt cloud clusters (AWS/Hetzner/Runpod) with real-time RAG systems, then rebuild them cost-efficient. ⚔️ Lead like a hacker-general : Mentor squads, review PRs mid-deployment, and ship production-grade tools in 48-hour sprints. ⚔️ Bridge chaos to clarity : Turn founder visions into Python + React missiles — no red tape, just impact. ⚔️ Your Arsenal 🧑💻 Code Weapons Python (Flask, Django) Node.js / Express React / Next.js MongoDB / Postgres ☁️ Cloud & DevOps Gear AWS (Lambda, ECS) Hetzner Bare Metal Servers Runpod GPU Clusters Docker / Kubernetes CI/CD Pipelines 🧠 AI / ML Firepower OSS Models: Llama 3, Mistral, DeepSeek LangChain / LangGraph + custom RAG hacks HuggingFace Transformers Real-time inference tuning 🧠 Who You Are ✅ Code gladiator with 3+ real projects on GitHub (bonus if containers have escaped into prod). ✅ Cloud insurgent fluent in IaC (Infrastructure as Code) – Hetzner and Runpod are your playground. ✅ Model whisperer – you’ve fine-tuned, quantized, and deployed open weights in real battles. ✅ Startup DNA – problems are loot boxes, not blockers. Permission is for the weak. 💥 Why This Beats Corporate Internships 🔧 Tech Stack: MERN + Python + Open-source AI/DevOps fusion (rare combo!) 🚀 Real Impact: Your code goes live to clients – no “simulations” or shadow projects. 🧠 Full Autonomy: You’ll get access to GPU clusters + full architectural freedom. 
📈 Growth Path: Fast-track to full-time with competitive compensation + equity. 💼 Culture: No red tape. Just shipping, solving, and high-fives. 🎯 The Deal Phase 1: Intern (0–6 Months) Fixed stipend (for the bold, not the comfy) Ship 2+ client-ready AI products (portfolio > pedigree) Master open-source model deployment at scale Phase 2: FTE (Post 6 Months) Competitive comp + meaningful equity Lead AI pods with cloud budget autonomy ⚡ Apply If You: Can optimize Llama 3 APIs on Hetzner while debugging K8s Believe open-source > closed models for real-world impact Treat “impossible deadlines” as power-ups Can start yesterday 📮 How to Apply Drop your GitHub link (show us your best OSS battle scars) Write a 1-sentence battle cry : “How I’d deploy Mixtral to crush customer support costs” Email us at: careers@foodnests.com Subject line: [OSS GLADIATOR] - {Your Name} - {Cloud War Story} “We don’t count years. We count models deployed at 3 AM.” (Top 10 GitHub profiles get early interviews) #HiringNow #AIInternship #FullStackIntern #OpenSourceAI #MERNStack #PythonDeveloper #DevOpsJobs #LangChain #Runpod #Kubernetes #GitHubHackers #StartupJobs #EngineeringGraduates #BTechLife #LifeAtStartup #NowHiring #HackAndLead #ProductMindset #FullStackLife #GPTDev #AIxEngineering #BuilderNotBystander #StartupTech #GrowthHack #NodejsJobs #PythonDev #AWSCloud #EngineeringLeadership #JaipurTech #MakeStuffReal
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization—ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration. Responsibilities Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines. LLM Expertise: Build/fine-tune GPT, Claude, Gemini, LLaMA 2/3, Mistral; deep expertise in RLHF, transformer internals, and multi-modal integration. RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector). Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, AWS SageMaker/Bedrock integration. Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards. LLMOps/MLOps: Deploy CI/CD pipelines, automate tuning/rollout, handle drift, rollback, and incidents with KPI dashboards. Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling. Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, LGPL patterns. Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, HIL to track hallucination, toxicity, latency, and sustainability. Skills Required 4–5 years in AI/ML (2+ in GenAI) Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC Proficiency with MLflow, Kubeflow, Airflow, Prompt flow Experience with LLMs, vector DBs, prompt engineering, MLOps Solid foundation in applied mathematics & statistics Nice to Have Open-source contributions, AI publications Hands-on with cloud-native GenAI deployment Deep interest in ethical AI and AI safety 2 Days WFO Mandatory. Don't meet every job requirement? That's okay!
Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
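The hybrid retrieval named in the posting above blends a lexical score with an embedding score before ranking. The sketch below uses a crude term-overlap score as a stand-in for BM25 and tiny hand-made vectors; all documents, vectors, and the blending weight are invented for illustration:

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document (a crude BM25 stand-in)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_rank(query: str, query_vec, docs: dict, alpha: float = 0.5):
    """Rank doc ids by a blend of vector and lexical scores; alpha weights the vector side."""
    scored = []
    for doc_id, (text, vec) in docs.items():
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

# Toy corpus with made-up 2-d "embeddings" (illustrative values only).
docs = {
    "a": ("refund policy for orders", [1.0, 0.0]),
    "b": ("gpu autoscaling guide", [0.0, 1.0]),
}
ranking = hybrid_rank("refund policy", [0.9, 0.1], docs)
```

In production the lexical side would be BM25 from a search engine and the vector side a call to a vector DB; the blend (or a reciprocal-rank fusion) happens exactly at this ranking step.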
Posted 2 days ago
10.0 years
15 - 20 Lacs
Jaipur, Rajasthan, India
On-site
We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale. 🎯 Key Responsibilities 🧩 Product Management: Define product vision, roadmap, and AI use cases aligned with business goals. Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features. Translate ambiguous problem statements into clear, prioritized product requirements. ⚙️ AI/ML Engineering & Model Development Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.). Build pipelines for data preprocessing, model training, evaluation, and deployment. Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc. Contribute to R&D for cutting-edge models in GenAI (text, vision, code, multimodal). 🤖 AI Agents & LLM Tooling Design and implement autonomous or semi-autonomous AI Agents using tools like AutoGen, LangGraph, CrewAI, etc. Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG). Continuously monitor, test, and improve LLM behavior, safety, and output quality. 📊 Data Science & Analytics Explore and analyze large datasets to generate insights and inform model development. Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis. Work with structured, unstructured, and multimodal data (text, audio, image, etc.).
🧰 Preferred Tech Stack / Tools Languages: Python, SQL, optionally Rust or TypeScript Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI Platforms: AWS, Azure, GCP, Vertex AI, SageMaker ML Ops: MLflow, Weights & Biases, DVC, Kubeflow Data: Pandas, NumPy, Spark, Airflow, Databricks Vector DBs: Pinecone, Weaviate, FAISS Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral Tools: Git, Docker, Kubernetes, REST, GraphQL 🧑💼 Qualifications Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. 10+ years of experience in core ML, AI, or Data Science roles. Proven experience building and shipping AI/ML products. Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation. Strong product thinking and ability to work closely with both technical and non-technical stakeholders. Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision. 🌟 Bonus Skills Experience with autonomous agents and multi-agent orchestration. Open-source contributions to ML/AI projects. Prior startup or high-growth tech company experience. Knowledge of reinforcement learning, diffusion models, or multimodal AI.
Skills: text,claude,vision,hugging face transformers,sagemaker,hallucination reduction,langchain,genai safety,machine learning,data science & analytics,transformers,crewai,gcp,open-source contributions to ml/ai projects,startup,chromadb,graphql,pipelines,diffusion models,llm architectures,prompt engineering,gpt,weaviate,cohere,structured, unstructured, and multimodal data,docker,autogen,ai/ml products,model development,git,ai use,a/b testing,core ml,code,ai/ml engineering & model development,vertex ai,architect intelligent ai agents,tensorflow,bleu,ai-driven features,error analysis,roadmap,typescript,retrieval-augmented generation (rag),model training,multimodal ai,weights & biases,image,generative ai,hugging face,ray,f1,explore and analyze large datasets,spark,kubernetes,data science,product management,autonomous agents,mlflow,multimodal,ai,rest,google gemini,model evaluation,computer vision,mistral,vector databases,sql,engineering,airflow,output quality,pinecone,langgraph,reinforcement learning,pandas,llms,rust,ai-powered features,fastapi,multi-agent orchestration,embeddings,python,aws,ml models,kubeflow,pytorch,azure,dvc,openai,faiss,databricks,audio,ai engineering,numpy,anthropic,define product vision
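Retrieval-augmented generation, which this role calls out repeatedly, ultimately means stitching retrieved chunks into the prompt sent to the model. A simplified sketch of that assembly step; the instruction wording, separator, and character budget are assumptions for illustration, and real systems budget by tokens, not characters:

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str], max_chars: int = 500) -> str:
    """Pack as many retrieved chunks as fit the budget, then append the question."""
    context_parts, used = [], 0
    for chunk in retrieved_chunks:  # assumed already ranked most-relevant first
        if used + len(chunk) > max_chars:
            break
        context_parts.append(chunk)
        used += len(chunk)
    context = "\n---\n".join(context_parts)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Toy retrieved chunks (invented text, standing in for vector-DB results).
chunks = ["Weaviate is a vector database.", "Pinecone offers managed vector search."]
prompt = build_rag_prompt("What is Weaviate?", chunks)
```

Grounding the instruction in "only the context below" is a common (if imperfect) hallucination-reduction pattern, which connects directly to the evaluation work the posting mentions.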
Posted 2 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities: • Develop and implement machine learning models and algorithms. • Work closely with project stakeholders to understand requirements and translate them into deliverables. • Utilize statistical and machine learning techniques to analyze and interpret complex data sets. • Stay updated with the latest advancements in AI/ML technologies and methodologies. • Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications: • Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. • Strong understanding of machine learning, deep learning and Generative AI concepts. Preferred Skills: • Experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, Deep Learning stack, NLP using Python • Strong knowledge and experience in Generative AI/LLM-based development. • Strong experience working with key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex). • Experience with cloud infrastructure for AI/Generative AI/ML on AWS, Azure. • Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data – including indexing, search, and advanced retrieval patterns. • Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. • Proficiency in generating and working with text embeddings, with understanding of embedding spaces and their applications in semantic search and information retrieval. • Experience with RAG concepts and fundamentals (VectorDBs, AWS OpenSearch, semantic search, etc.); expertise in implementing RAG systems that combine knowledge bases with Generative AI models. • Knowledge of training and fine-tuning Foundation Models (Anthropic, Claude, Mistral, etc.), including multimodal inputs and outputs. • Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks
(e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL • Experience with data visualization tools (e.g., Matplotlib, Seaborn, Quicksight). • Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). • Experience with version control systems (e.g., Git, CodeCommit). Good to have Skills • Knowledge and Experience in building knowledge graphs in production. • Understanding of multi-agent systems and their applications in complex problem-solving scenarios
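The "effective text chunking" this posting asks about commonly means fixed-size windows with overlap, so that a sentence straddling a boundary still appears intact in at least one chunk. A minimal character-based sketch (real pipelines usually chunk by tokens or by sentence/paragraph boundaries, and the default sizes here are arbitrary):

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character windows with overlap between neighbours."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # max(..., 1) ensures very short texts still yield one chunk.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

# Toy document: 100 characters cycling through the alphabet (illustrative only).
doc = "".join(chr(65 + i % 26) for i in range(100))
pieces = chunk_text(doc)
```

Each chunk would then be embedded and indexed individually; the overlap trades a little index size for recall at chunk boundaries.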
Posted 2 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: AI Agent Platform Engineer Location: Noida Department: Product Engineering – RevAi Pro Role Overview You will be responsible for designing, developing, and scaling the AI Agent Framework that powers automation-first modules in RevAi Pro, such as Tell Me, Action Center, and intelligent AI agents. This role is critical to shaping the core foundation of how automation, enterprise search, and just-in-time execution work inside our platform. Key Responsibilities 1. Agent Architecture & Runtime Architect and implement the core orchestration engine for AI agents (event-driven/task-based) Manage agent lifecycle functions such as spawn, pause, escalate, and terminate Enable secure, real-time communication between agents, services, and workflows Integrate memory and retrieval systems using vector databases like Pinecone, Weaviate, or Qdrant 2. LLM Integration & Prompt Engineering Integrate LLM providers (OpenAI, Azure OpenAI, Anthropic, Mistral, etc.) into agent workflows Create modular prompt templates with retry/fallback mechanisms Implement chaining logic and dynamic tool use for agents using LangChain or LlamaIndex Develop reusable agent types such as Summarizer, Validator, Notifier, Planner, etc. 3. Backend API & Microservices Development Develop FastAPI-based microservices for agent orchestration and skill execution Create APIs to register agents, execute agent actions, and manage runtime memory Implement RBAC, rate limiting, and security protocols for multi-tenant deployments 4. Data Integration & Task Routing Build connectors to integrate structured (CRM, SQL) and unstructured data sources (email, docs, transcripts) Route incoming data streams to relevant agents based on workflow and business rules Support ingestion from tools like Salesforce, HubSpot, Gong, and Zoom 5.
DevOps, Monitoring, and Scaling Deploy the agent platform using Docker and Kubernetes on Azure Implement Redis, Celery, or equivalent async task systems for agent task queues Set up observability to monitor agent usage, task success/failure, latency, and hallucination rates Create CI/CD pipelines for agent modules and prompt updates Ideal Candidate Profile 2–4 years of experience in backend engineering, ML engineering, or agent orchestration Strong command over Python (FastAPI, asyncio, Celery, SQLAlchemy) Experience with LangChain, LlamaIndex, Haystack, or other orchestration libraries Hands-on with OpenAI, Anthropic, or similar LLM APIs Comfortable with vector embeddings and semantic search systems Understanding of modern AI agent frameworks like AutoGen, CrewAI, Semantic Planner, or ReAct Familiarity with multi-tenant API security and SaaS architecture Bonus: Frontend collaboration experience to support UI for agents and dashboards Bonus: Familiarity with SaaS platforms in B2B domains like RevOps, CRM, or workflow automation What You’ll Gain Ownership of agent architecture inside a live enterprise-grade AI platform Opportunity to shape the future of AI-first business applications Collaboration with founders, product leaders, and early enterprise customers Competitive salary with potential ESOP First-mover engineering credit on one of the most advanced automation stacks in SaaS
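The retry/fallback prompting mechanism described in this role can be sketched in a few lines of plain Python. Here `call_llm` is a stand-in for a real provider SDK call (OpenAI, Anthropic, etc.), and the model names are purely illustrative:

```python
import time

PROMPT_TEMPLATE = "Summarize the following text in one sentence:\n\n{text}"

def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a provider SDK call; raises to simulate a transient failure."""
    if model == "primary-model":
        raise RuntimeError("simulated timeout")
    return f"[{model}] summary of: {prompt[:20]}..."

def generate(text: str, models=("primary-model", "fallback-model"), retries=2) -> str:
    """Try each model in order, retrying transient errors before falling back."""
    prompt = PROMPT_TEMPLATE.format(text=text)
    last_err = None
    for model in models:
        for _attempt in range(retries):
            try:
                return call_llm(model, prompt)
            except RuntimeError as err:
                last_err = err
                time.sleep(0)  # placeholder for exponential backoff
    raise RuntimeError(f"all models failed: {last_err}")

result = generate("Quarterly revenue grew 12% year over year.")
print(result)  # served by the fallback model after the primary 'fails'
```

A production version would distinguish retryable errors (timeouts, rate limits) from permanent ones (invalid request) and back off exponentially between attempts.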
Posted 3 days ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore North, India | Posted on 07/29/2025
Job Information
Job Type: Full time
Date Opened: 07/29/2025
Project Code: PRJ000
Industry: IT Services
Work Experience: 5-10 years
City: Bangalore North
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001
Posted 3 days ago
0.0 - 3.0 years
12 - 24 Lacs
Chennai, Tamil Nadu
On-site
We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you'll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.

Key Responsibilities:
• Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction.
• Design and build knowledge graphs by structuring information from unstructured or semi-structured text.
• Apply prompt engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios.
• Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks.
• Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from graph databases.
• Collaborate with teams to translate business problems into structured use cases for model development.
• Understand custom ontologies and entity schemas for the corresponding domain.
• Work with graph databases such as Neo4j and query them using Cypher or SPARQL.
• Evaluate and track performance using both standard metrics and graph-based KPIs.

Required Skills & Qualifications:
• Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and the OpenAI APIs.
• Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing.
• Practical experience working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning.
• Hands-on experience with graph database design and knowledge graph generation.
• Proficiency in Python API development (Flask/FastAPI) for serving models and utilities.
• Strong background in data preprocessing, text normalization, and annotation frameworks.
• Understanding of LLM orchestration with tools like LangChain or workflow automation.
• Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).

Nice to Have:
• Experience using LLMs for information extraction, summarization, or question answering over knowledge bases.
• Exposure to graph embeddings, GNNs, or semantic web technologies (RDF, OWL).
• Experience with cloud-based model deployment (AWS/GCP/Azure).
• Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,400,000.00 per year
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Natural Language Processing (NLP): 3 years (Preferred)
Language: English & Tamil (Preferred)
Location: Chennai, Tamil Nadu (Preferred)
Work Location: In person
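The NER-to-knowledge-graph pipeline described in this role can be sketched with a toy rule-based extractor. The pattern and entities below are hypothetical; a real pipeline would use spaCy, Hugging Face Transformers, or an LLM for extraction, and Neo4j for storage:

```python
import re
from collections import defaultdict

# Toy pattern for "<Entity> <relation> <Entity>" sentences
RELATION_RE = re.compile(
    r"(?P<head>[A-Z][a-z]+) (?P<rel>founded|acquired) (?P<tail>[A-Z][a-z]+)"
)

def extract_triples(text: str):
    """Extract (head, relation, tail) triples from free text."""
    return [(m["head"], m["rel"], m["tail"]) for m in RELATION_RE.finditer(text)]

def build_graph(triples):
    """Adjacency-style knowledge graph: node -> list of (relation, node)."""
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

text = "Ada founded Initech. Later, Initech acquired Globex."
triples = extract_triples(text)
graph = build_graph(triples)
print(triples)       # extracted (head, relation, tail) triples
print(graph["Ada"])  # outgoing edges from the 'Ada' node
```

In a production pipeline the extracted triples would be validated against a domain ontology before being written to a graph database via Cypher or SPARQL.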
Posted 3 days ago
The Mistral job market in India is thriving, with growing demand for professionals skilled in this area. Mistral roles are diverse, ranging from software development to data analysis to project management, so job seekers have a wide array of options to choose from. Major tech hubs such as Chennai, New Delhi, Noida, and Bengaluru (all represented in the listings above) are home to many companies actively hiring for these roles.
Salaries vary with experience and skill level: entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum. A typical career path progresses from Junior Developer to Senior Developer to Technical Lead, and eventually into management positions such as Project Manager or IT Director. Beyond Mistral-specific expertise, employers commonly expect complementary skills in areas such as Python, LLM integration, and related tooling.
As you prepare for these opportunities, remember to showcase your expertise, experience, and problem-solving skills during interviews. With the right preparation and confidence, you can land a rewarding career in this dynamic field. Good luck!