
630 Mistral Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We're seeking a visionary Enterprise Architect to join our CTO Office and shape cross-portfolio solutions at the intersection of AI, Customer Experience (CX), Cybersecurity, and Digital Skilling technologies. You'll architect scalable, standardized solutions for global clients, govern complex deals, and collaborate with diverse stakeholders to translate business needs into future-ready technical strategies. As a trusted advisor, you will evangelize solution value, articulating how the right technology mix enables our customers to achieve strategic outcomes.

At TeKnowledge, your work makes an impact from day one. We partner with organizations to deliver AI-First Expert Technology Services that drive meaningful impact in AI, Customer Experience, and Cybersecurity. We turn complexity into clarity and potential into progress, in a place where people lead and tech empowers. You'll be part of a diverse and inclusive team where trust, teamwork, and shared success fuel everything we do. We push boundaries, using advanced technologies to solve complex challenges for clients around the world. Here, your work drives real change, and your ideas help shape the future of technology. We invest in you with top-tier training, mentorship, and career development, ensuring you stay ahead in an ever-evolving world.

Why You'll Enjoy It Here: Be Part of Something Big: a growing company where your contributions matter. Make an Immediate Impact: support groundbreaking technologies with real-world results. Work on Cutting-Edge Tech: AI, cybersecurity, and next-gen digital solutions. Thrive in an Inclusive Team: a culture built on trust, collaboration, and respect. We Care: integrity, empathy, and purpose guide every decision. We're looking for innovators, problem-solvers, and experts ready to drive change and grow with us. We Are TeKnowledge. Where People Lead and Tech Empowers.

Responsibilities: Design enterprise-grade architectures integrating structured/unstructured data, analytics, and advanced AI models (GenAI, LLMs, cognitive services). Build scalable data pipelines and lake-centric architectures to power real-time analytics and machine learning. Architect multi-cloud AI/ML platforms using Azure, including deployment of LLMs (Azure OpenAI and open-source models like LLaMA, Mistral, Falcon). Define infrastructure, data, and app requirements to deploy LLMs in customer private data centers. Lead technical reviews for high-value deals, identifying risks and mitigation strategies. Design integrated solutions across AI, CX, Cybersecurity, and Tech Managed Services portfolios. Develop standard design patterns and reusable blueprints for repeatable, low-risk, and scalable solution delivery. Present architectural solutions to C-suite executives, aligning technical outcomes with business value and ROI. Collaborate with sales and pre-sales to scope complex opportunities and develop compelling proposals. Foster innovation across CTO, Sales, and Solution teams. Identify synergy across offerings (e.g., Microsoft Copilot + AI-first CX + Cybersecurity). Support product teams with market feedback and solution evolution. Define architectural best practices ensuring security, compliance, and scalability. Mentor delivery teams on frameworks and emerging tech adoption. Shape and execute the enterprise architecture strategy aligned with business goals. Champion digital transformation and technology innovation. Leverage expertise in Azure and Microsoft D365 to support solution architecture. Drive responsible AI adoption and ensure awareness of privacy, bias, and security in deployments. Ensure all solutions meet IT security and compliance standards. Collaborate with Legal and Procurement for contract negotiations and vendor performance. Lead, mentor, and build a high-performing, collaborative CTO team with a customer-first mindset.

Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field. Experience: 10+ years in enterprise architecture, with 5+ years in customer-facing roles. Certifications: TOGAF, Zachman, ITIL, CISSP, Azure certifications, or equivalents preferred. Proven experience architecting and delivering AI/ML platforms, data lakes, and intelligent applications at enterprise scale. Demonstrable experience deploying local LLMs in production environments, including integration with LangChain, databases, and private storage. Strong knowledge of enterprise architecture frameworks and multi-cloud platforms (with a focus on Azure). Ability to design and deliver end-to-end solutions including networks (voice and data), microservices, business applications, resilience, disaster recovery, and security. Understanding of On-Prem / Private Cloud workload migration to public or hybrid cloud environments. Commercial acumen with the ability to articulate the business value of cloud-based solutions to executive stakeholders. Strong problem-solving and critical thinking skills with a proactive, outcome-oriented mindset. Experience with cloud computing, data center technologies, virtualization, and enterprise-grade security policies/processes. Proficiency in AI/ML, cybersecurity frameworks, customer experience platforms, and Microsoft Cloud (Azure, M365, D365). Exceptional communication and storytelling abilities for both technical and non-technical audiences. Experience engaging with large enterprise clients across industries such as government, healthcare, banking & finance, travel, and manufacturing.

Empowering Leadership and Innovation: At TeKnowledge, we are committed to fostering a culture of inspiring leadership and innovation. Our core leadership competencies are integral to our success: Inspire: We prioritize creating an inclusive environment, leading with purpose, and acting with integrity and respect. Build: Our leaders own business growth, drive innovation, and continuously strive for excellence. Deliver: We focus on setting clear priorities, embracing agility and change, and fostering collaboration for growth. We are looking for talented individuals who embody these competencies, are ready to grow, and are eager to contribute to our dynamic team. If you are passionate about making a meaningful impact and excel in a collaborative, forward-thinking environment, we invite you to apply and help us shape the future.
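For context, a minimal Python sketch of the kind of Azure OpenAI chat call referenced in the responsibilities above; the endpoint, API key, and deployment name are placeholders, not values from the posting:

```python
# Minimal Azure OpenAI chat call (openai>=1.x); endpoint and deployment names are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder resource
    api_key="<api-key>",                                        # placeholder key
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<deployment-name>",  # the Azure deployment, e.g. a GPT-4o deployment
    messages=[
        {"role": "system", "content": "You are an enterprise architecture assistant."},
        {"role": "user", "content": "Summarize the trade-offs of hosting an open-source LLM on-premises."},
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```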

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

Remote

Location: Remote Employment Type: Full-time About the Role We are looking for a Senior Machine Learning Engineer to lead the development and deployment of AI/ML models for our platforms. In this role, you will drive technical strategy, design and deploy intelligent systems, mentor junior engineers, and collaborate with cross-functional teams to deliver scalable, production-grade ML solutions. Key Responsibilities Independently design, build, and deploy machine learning models for core use cases. Drive the end-to-end lifecycle of ML projects—from scoping and architecture to implementation, deployment, and performance tuning. Maintain a hands-on approach in all aspects of development—from data preprocessing and feature engineering to model training, evaluation, and optimization. Lead technical reviews, provide constructive feedback, and help grow the team’s skill sets through coaching and knowledge sharing. Provide technical leadership and mentorship to junior engineers and data scientists, fostering a collaborative and high-performing team culture. Drive ML initiatives from ideation through production, ensuring scalability, performance, and maintainability. Collaborate with cross-functional teams including product, engineering, and operations to integrate intelligent solutions into user-facing products. Establish and promote ML best practices, including reproducibility, version control, testing, MLOps, and data governance. Oversee and guide the creation of scalable and maintainable ML pipelines and infrastructure. Stay ahead of industry trends and guide the adoption of new tools and techniques where relevant. Evaluate and integrate cutting-edge tools, frameworks, and techniques in NLP, deep learning, and computer vision. Own the quality, fairness, and compliance of ML systems, especially in sensitive use cases like content filtering and moderation. Design and implement machine learning models for automated content moderation, including toxicity, hate speech, spam, and NSFW detection. Build and optimize personalized recommendation systems using collaborative filtering, content-based, and hybrid approaches. Develop and maintain embedding-based similarity search for recommending relevant content based on user behavior and content metadata. Fine-tune and apply LLMs for moderation and summarization, leveraging prompt engineering or adapter-based methods. Deploy real-time inference pipelines for immediate content filtering and user-personalized suggestions. Ensure content moderation models are explainable, auditable, and bias-mitigated to align with ethical AI practices. Hands-on experience in content recommendation systems (e.g., collaborative filtering, ranking models, embeddings). Experience with content moderation frameworks, such as Perspective API, OpenAI moderation endpoints, or custom NLP classifiers. Strong knowledge of transformer-based models for NLP, including experience with Hugging Face, BERT, RoBERTa, etc. Practical experience with LLMs (GPT, Claude, Mistral) and tools for LLM fine-tuning or prompt engineering for moderation tasks. Familiarity with vector databases (e.g., FAISS, Pinecone) for similarity search in recommendation systems. Deep understanding of model fairness, debiasing techniques, and AI safety in content moderation. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field.
5+ years of hands-on experience in machine learning, NLP, or deep learning, with a track record of leading projects. Expertise in Python and machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, Hugging Face, etc. Strong background in recommendation systems, content moderation, or ranking algorithms. Experience with cloud platforms (AWS/GCP/Azure), distributed computing (Spark), and MLOps tools. Proven ability to lead complex ML projects and teams, delivering business value through intelligent systems. Excellent communication skills, with the ability to explain complex ML concepts to stakeholders. Experience with LLMs (GPT, Claude, Mistral) and fine-tuning for domain-specific tasks. Knowledge of reinforcement learning, graph ML, or multimodal systems. Previous experience building AI systems for content moderation, personalization, or recommendation in a high-scale platform. Strong awareness of ethical AI principles, fairness, bias mitigation, and responsible data usage. Contributions to open-source ML projects or published research.
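For context, a small sketch of the embedding-based similarity search described above, using sentence-transformers; the model name and catalog items are illustrative choices, not part of the posting:

```python
# Embedding-based similarity search with sentence-transformers; model choice is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

catalog = [
    "Beginner yoga routine for flexibility",
    "Advanced strength training program",
    "Guided meditation for better sleep",
]
query = "easy stretching exercises"

catalog_emb = model.encode(catalog, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, catalog_emb)[0]          # cosine similarity to each catalog item
ranked = sorted(zip(catalog, scores.tolist()), key=lambda x: x[1], reverse=True)
for item, score in ranked:
    print(f"{score:.3f}  {item}")
```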

Posted 3 days ago

Apply

4.0 - 6.0 years

10 - 18 Lacs

Chennai, Tamil Nadu, India

On-site

JD: Generative AI Specialist (Junior / Senior). Years of experience: 4-6 years (Junior) or 6-8 years (Senior). Shift: 11 AM - 8 PM. Location: Chennai. Mode: Work From Office. Role: The Generative AI Specialist will build GenAI LLM-driven solutions using state-of-the-art models (OpenAI, Gemini, Claude) and open-source models (LLaMA, Mistral). Should have expertise in fine-tuning and training models. Should have implemented projects with expertise in Agents, Tools, and RAG solutions. Hands-on expertise in integrating LLMs with vector DBs such as ChromaDB, FAISS, and Pinecone is required. Expertise in PEFT and quantization of models is required. Experience in TensorFlow, PyTorch, Python, Hugging Face, and Transformers is a must. Expertise in data preparation and analysis and hands-on expertise in deep learning model development is preferred. Additional expertise in deploying models on AWS is desired but optional. Skills: OpenAI, Gemini, LangChain, Transformers, Hugging Face, Python, PyTorch, TensorFlow, vector DBs (ChromaDB, FAISS, Pinecone). Project experience: at least 1-2 live implementations of Generative AI-driven solutions. Extensive experience in implementing chatbots, knowledge search, and NLP. Good expertise in implementing machine learning and deep learning solutions for at least 2 years. 4th Floor, Techno Park, 10, Rajiv Gandhi Salai, Customs Colony, Sakthi Nagar, Thoraipakkam, Chennai 600097 Skills: rag,nlp,aws,vectordb (chromadb, faiss, pinecone),claude,agents,tensorflow,langchain,chatbots,hugging face,analysis,transformers,python,chromadb,gemini,openai,deep learning,faiss,llms,opensource models,pytorch,genai llm,peft,machine learning,llama,vectordb,mistral,pinecone
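A toy FAISS example of the vector-DB integration this role calls for; the dimensionality and random vectors stand in for real document embeddings:

```python
# Toy FAISS index: dimensionality and vectors are synthetic stand-ins for real document embeddings.
import numpy as np
import faiss

dim = 384                                   # e.g. the output size of a sentence-embedding model
doc_vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)              # exact L2 search; swap for IVF/HNSW indexes at scale
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)     # top-5 nearest documents
print(ids[0], distances[0])
```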

Posted 3 days ago

Apply

2.0 - 3.0 years

6 - 10 Lacs

Gurgaon

On-site

We're Hiring: Machine Learning Engineer Location: Gurugram (Hybrid) Industry: Mobile Gaming | Real Money Gaming | AI-Powered Entertainment Experience: 2–3 Years Join Games Pro India Pvt. Ltd. and help build cutting-edge AI systems and LLM-native infrastructure for the future of gaming. Key Responsibilities: Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI. Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for real-time gaming applications. Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback. Collaborate cross-functionally with data scientists, game developers, product teams, and curators to build impactful AI solutions. Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods. Qualifications: 2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction. Strong Python programming skills and experience building production-ready codebases. Hands-on experience with LLM frameworks/tooling such as LangChain, Hugging Face, OpenAI APIs, Transformers. Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices. Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow. Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product- Interested candidates may share their resume at shruti.sharma@gamespro.in (contact no. 8076 310 357). #Hiring #MachineLearning #LLM #AIinGaming #LangChain #HuggingFace #GamingJobs #MLJobs #GamesProIndia #TechCareers #GurugramJobs #DeepLearning #PythonJobs Job Type: Full-time Pay: ₹600,000.00 - ₹1,000,000.00 per year Schedule: Day shift Work Location: In person
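A brief sketch of parameter-efficient fine-tuning (LoRA via Hugging Face peft) of the sort mentioned above; the base model ID and target modules are assumptions, not requirements from the posting:

```python
# LoRA adapter setup with Hugging Face peft; model name and target modules are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # assumed open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base)   # used later to tokenize the task dataset
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections typical for LLaMA-style models
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()          # only the small adapter weights are trainable
# ...then train with transformers.Trainer or trl's SFTTrainer on the task dataset
```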

Posted 3 days ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

🚀 AI-FULL STACK INTERN → FUTURE TECH LEAD (MERN / PYTHON + DEVOPS) Location: Lucknow (Onsite) | Duration: 6 Months → Full-Time “For rebels who fine-tune Llama 3 before breakfast and argue about Kubernetes over chai. If deploying open-source models on Hetzner at 2 AM excites you—this is your battleground.” 💻 Your War Mission Build AI-powered business weapons that redefine industries: ⚔️ Deploy open-source giants : Llama 3, Mistral, Phi-3 — optimize for consultative salesbots, customer assistants, and predictive engines. ⚔️ Architect at scale : Melt cloud clusters (AWS/Hetzner/Runpod) with real-time RAG systems, then rebuild them cost-efficient. ⚔️ Lead like a hacker-general : Mentor squads, review PRs mid-deployment, and ship production-grade tools in 48-hour sprints. ⚔️ Bridge chaos to clarity : Turn founder visions into Python + React missiles — no red tape, just impact. ⚔️ Your Arsenal 🧑‍💻 Code Weapons Python (Flask, Django) Node.js / Express React / Next.js MongoDB / Postgres ☁️ Cloud & DevOps Gear AWS (Lambda, ECS) Hetzner Bare Metal Servers Runpod GPU Clusters Docker / Kubernetes CI/CD Pipelines 🧠 AI / ML Firepower OSS Models: Llama 3, Mistral, DeepSeek LangChain / LangGraph + custom RAG hacks HuggingFace Transformers Real-time inference tuning 🧠 Who You Are ✅ Code gladiator with 3+ real projects on GitHub (bonus if containers have escaped into prod). ✅ Cloud insurgent fluent in IaC (Infrastructure as Code) – Hetzner and Runpod are your playground. ✅ Model whisperer – you’ve fine-tuned, quantized, and deployed open weights in real battles. ✅ Startup DNA – problems are loot boxes, not blockers. Permission is for the weak. 💥 Why This Beats Corporate Internships 🔧 Tech Stack: MERN + Python + Open-source AI/DevOps fusion (rare combo!) 🚀 Real Impact: Your code goes live to clients – no “simulations” or shadow projects. 🧠 Full Autonomy: You’ll get access to GPU clusters + full architectural freedom. 📈 Growth Path: Fast-track to full-time with competitive compensation + equity. 💼 Culture: No red tape. Just shipping, solving, and high-fives. 🎯 The Deal Phase 1: Intern (0–6 Months) Fixed stipend (for the bold, not the comfy) Ship 2+ client-ready AI products (portfolio > pedigree) Master open-source model deployment at scale Phase 2: FTE (Post 6 Months) Competitive comp + meaningful equity Lead AI pods with cloud budget autonomy ⚡ Apply If You: Can optimize Llama 3 APIs on Hetzner while debugging K8s Believe open-source > closed models for real-world impact Treat “impossible deadlines” as power-ups Can start yesterday 📮 How to Apply Drop your GitHub link (show us your best OSS battle scars) Write a 1-sentence battle cry : “How I’d deploy Mixtral to crush customer support costs” Email us at: careers@foodnests.com Subject line: [OSS GLADIATOR] - {Your Name} - {Cloud War Story} “We don’t count years. We count models deployed at 3 AM.” (Top 10 GitHub profiles get early interviews) #HiringNow #AIInternship #FullStackIntern #OpenSourceAI #MERNStack #PythonDeveloper #DevOpsJobs #LangChain #Runpod #Kubernetes #GitHubHackers #StartupJobs #EngineeringGraduates #BTechLife #LifeAtStartup #NowHiring #HackAndLead #ProductMindset #FullStackLife #GPTDev #AIxEngineering #BuilderNotBystander #StartupTech #GrowthHack #NodejsJobs #PythonDev #AWSCloud #EngineeringLeadership #JaipurTech #MakeStuffReal

Posted 3 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Overview We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization—ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration. Responsibilities Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines. LLM Expertise: Build/fine-tune GPT, Claude, Gemini, LLaMA 2/3, Mistral; deep expertise in RLHF, transformer internals, and multi-modal integration. RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector). Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, AWS SageMaker/Bedrock integration. Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards. LLMOps/MLOps: Deploy CI/CD pipelines, automate tuning/rollout, handle drift, rollback, and incidents with KPI dashboards. Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling. Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, LGPL patterns. Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, HIL to track hallucination, toxicity, latency, and sustainability. Skills Required 4–5 years in AI/ML (2+ in GenAI) Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC Proficiency with MLflow, Kubeflow, Airflow, Prompt flow Experience with LLMs, vector DBs, prompt engineering, MLOps Solid foundation in applied mathematics & statistics Nice to Have Open-source contributions, AI publications Hands-on with cloud-native GenAI deployment Deep interest in ethical AI and AI safety 2 Days WFO Mandatory Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
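A skeleton of the kind of FastAPI microservice that could front a RAG pipeline as described above; retrieval and the LLM call are stubbed placeholders:

```python
# Skeleton FastAPI service for a RAG-style "ask" endpoint; retrieval and LLM calls are stubbed.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-service")

class AskRequest(BaseModel):
    question: str
    top_k: int = 4

@app.post("/ask")
def ask(req: AskRequest) -> dict:
    # 1) embed req.question and query a vector DB (Pinecone/FAISS/pgvector) for top_k chunks
    # 2) build a grounded prompt from the chunks and call the chosen LLM
    # Both steps are stubbed here.
    context = ["<retrieved chunk 1>", "<retrieved chunk 2>"][: req.top_k]
    answer = f"(stubbed answer to: {req.question})"
    return {"answer": answer, "context": context}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```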

Posted 3 days ago

Apply

10.0 years

15 - 20 Lacs

Jaipur, Rajasthan, India

On-site

We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale. 🎯 Key Responsibilities 🧩 Product Management: Define product vision, roadmap, and AI use cases aligned with business goals. Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features. Translate ambiguous problem statements into clear, prioritized product requirements. ⚙️ AI/ML Engineering & Model Development Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.). Build pipelines for data preprocessing, model training, evaluation, and deployment. Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc. Contribute to R&D for cutting-edge models in GenAI (text, vision, code, multimodal). 🤖 AI Agents & LLM Tooling Design and implement autonomous or semi-autonomous AI Agents using tools like AutoGen, LangGraph, CrewAI, etc. Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG). Continuously monitor, test, and improve LLM behavior, safety, and output quality. 📊 Data Science & Analytics Explore and analyze large datasets to generate insights and inform model development. Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis. Work with structured, unstructured, and multimodal data (text, audio, image, etc.). 🧰 Preferred Tech Stack / Tools Languages: Python, SQL, optionally Rust or TypeScript Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI Platforms: AWS, Azure, GCP, Vertex AI, Sagemaker ML Ops: MLflow, Weights & Biases, DVC, Kubeflow Data: Pandas, NumPy, Spark, Airflow, Databricks Vector DBs: Pinecone, Weaviate, FAISS Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral Tools: Git, Docker, Kubernetes, REST, GraphQL 🧑‍💼 Qualifications Bachelor’s, Master’s, or PhD in Computer Science, Data Science, Machine Learning, or a related field. 10+ years of experience in core ML, AI, or Data Science roles. Proven experience building and shipping AI/ML products. Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation. Strong product thinking and ability to work closely with both technical and non-technical stakeholders. Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision. 🌟 Bonus Skills Experience with autonomous agents and multi-agent orchestration. Open-source contributions to ML/AI projects. Prior startup or high-growth tech company experience. Knowledge of reinforcement learning, diffusion models, or multimodal AI.
Skills: text,claude,vision,hugging face transformers,sagemaker,hallucination reduction,langchain,genai safety,machine learning,data science & analytics,transformers,crewai,gcp,open-source contributions to ml/ai projects,startup,chromadb,graphql,pipelines,diffusion models,llm architectures,prompt engineering,gpt,weaviate,cohere,structured, unstructured, and multimodal data,docker,autogen,ai/ml products,model development,git,ai use,a/b testing,core ml,code,ai/ml engineering & model development,vertex ai,architect intelligent ai agents,tensorflow,bleu,ai-driven features,error analysis,roadmap,typescript,retrieval-augmented generation (rag),model training,multimodal ai,weights & biases,image,generative ai,hugging face,ray,f1,explore and analyze large datasets,spark,kubernetes,data science,product management,autonomous agents,mlflow,multimodal,ai,rest,google gemini,model evaluation,computer vision,mistral,vector databases,sql,engineering,airflow,output quality,pinecone,langgraph,reinforcement learning,pandas,llms,rust,ai-powered features,fastapi,multi-agent orchestration,embeddings,python,aws,ml models,kubeflow,pytorch,azure,dvc,openai,faiss,databricks,audio,ai engineering,numpy,anthropic,define product vision
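A tiny scikit-learn sketch of the offline model evaluation (precision/recall/F1) the responsibilities mention; the labels are toy data:

```python
# Offline evaluation of a binary classifier (e.g. a relevance or moderation model) with scikit-learn.
from sklearn.metrics import f1_score, precision_score, recall_score, classification_report

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # toy ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # toy model predictions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```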

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Key Responsibilities: • Develop and implement machine learning models and algorithms. • Work closely with project stakeholders to understand requirements and translate them into deliverables. • Utilize statistical and machine learning techniques to analyze and interpret complex data sets. • Stay updated with the latest advancements in AI/ML technologies and methodologies. • Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications: • Bachelor’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. • Strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred Skills: • Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP using Python. • Strong knowledge and experience in Generative AI/LLM-based development. • Strong experience working with key LLM APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex). • Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure. • Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data – including indexing, search, and advanced retrieval patterns. • Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. • Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. • Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.) and expertise in implementing RAG systems that combine knowledge bases with Generative AI models. • Knowledge of training and fine-tuning Foundation Models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. • Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL. • Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). • Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). • Experience with version control systems (e.g., Git, CodeCommit). Good-to-Have Skills: • Knowledge and experience in building knowledge graphs in production. • Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
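A small, dependency-free sketch of the text-chunking step mentioned in the preferred skills; chunk size and overlap are arbitrary illustrative values:

```python
# Fixed-size character chunking with overlap, a common pre-step before embedding documents for RAG.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap          # step back so neighbouring chunks share context
    return chunks

doc = "Generative AI systems retrieve relevant chunks before answering. " * 40
print(len(chunk_text(doc)), "chunks")
```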

Posted 3 days ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: AI Agent Platform Engineer Location: Noida Department: Product Engineering – RevAi Pro Role Overview You will be responsible for designing, developing, and scaling the AI Agent Framework that powers automation-first modules in RevAi Pro, such as Tell Me, Action Center, and intelligent AI agents. This role is critical to shaping the core foundation of how automation, enterprise search, and just-in-time execution work inside our platform. Key Responsibilities 1. Agent Architecture & Runtime Architect and implement the core orchestration engine for AI agents (event-driven/task-based) Manage agent lifecycle functions such as spawn, pause, escalate, and terminate Enable secure, real-time communication between agents, services, and workflows Integrate memory and retrieval systems using vector databases like Pinecone, Weaviate, or Qdrant 2. LLM Integration & Prompt Engineering Integrate LLM providers (OpenAI, Azure OpenAI, Anthropic, Mistral, etc.) into agent workflows Create modular prompt templates with retry/fallback mechanisms Implement chaining logic and dynamic tool use for agents using LangChain or LlamaIndex Develop reusable agent types such as Summarizer, Validator, Notifier, Planner, etc. 3. Backend API & Microservices Development Develop FastAPI-based microservices for agent orchestration and skill execution Create APIs to register agents, execute agent actions, and manage runtime memory Implement RBAC, rate limiting, and security protocols for multi-tenant deployments 4. Data Integration & Task Routing Build connectors to integrate structured (CRM, SQL) and unstructured data sources (email, docs, transcripts) Route incoming data streams to relevant agents based on workflow and business rules Support ingestion from tools like Salesforce, HubSpot, Gong, and Zoom 5. DevOps, Monitoring, and Scaling Deploy the agent platform using Docker and Kubernetes on Azure Implement Redis, Celery, or equivalent async task systems for agent task queues Set up observability to monitor agent usage, task success/failure, latency, and hallucination rates Create CI/CD pipelines for agent modules and prompt updates Ideal Candidate Profile 2–4 years of experience in backend engineering, ML engineering, or agent orchestration Strong command over Python (FastAPI, asyncio, Celery, SQLAlchemy) Experience with LangChain, LlamaIndex, Haystack, or other orchestration libraries Hands-on with OpenAI, Anthropic, or similar LLM APIs Comfortable with vector embeddings and semantic search systems Understanding of modern AI agent frameworks like AutoGen, CrewAI, Semantic Planner, or ReAct Familiarity with multi-tenant API security and SaaS architecture Bonus: Frontend collaboration experience to support UI for agents and dashboards Bonus: Familiarity with SaaS platforms in B2B domains like RevOps, CRM, or workflow automation What You’ll Gain Ownership of agent architecture inside a live enterprise-grade AI platform Opportunity to shape the future of AI-first business applications Collaboration with founders, product leaders, and early enterprise customers Competitive salary with potential ESOP First-mover engineering credit on one of the most advanced automation stacks in SaaS
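A toy asyncio sketch of the agent lifecycle (spawn, run, terminate) described under agent architecture; class and state names are illustrative, not RevAi Pro internals:

```python
# Toy agent lifecycle coordinated with asyncio; all names are illustrative.
import asyncio
from enum import Enum

class AgentState(str, Enum):
    SPAWNED = "spawned"
    RUNNING = "running"
    PAUSED = "paused"
    TERMINATED = "terminated"

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.state = AgentState.SPAWNED

    async def run(self, task: str) -> str:
        self.state = AgentState.RUNNING
        await asyncio.sleep(0.1)                 # stand-in for an LLM or tool call
        self.state = AgentState.TERMINATED
        return f"{self.name} finished: {task}"

async def main() -> None:
    agents = [Agent(f"agent-{i}") for i in range(3)]
    results = await asyncio.gather(*(a.run("summarize account notes") for a in agents))
    print(results)

asyncio.run(main())
```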

Posted 3 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bangalore North, India | Posted on 07/29/2025 Job Information: Job Type: Full time | Date Opened: 07/29/2025 | Project Code: PRJ000 | Industry: IT Services | Work Experience: 5-10 years | City: Bangalore North | State/Province: Karnataka | Country: India | Zip/Postal Code: 560001 About Us

We are a team of cloud enthusiasts, keen and spirited to make the latest cloud technologies work for you.
Rapyder is an agile, innovative company that makes Cloud work for you. With a young, passionate team and expertise in Cloud Computing Solutions, Big Data, Marketing & Commerce, DevOps, and Managed Services, Rapyder is the leading provider of Strategic Cloud Consulting. Solutions provided by Rapyder are seamless, secure, and scalable. With headquarters in Bangalore and sales & support offices in Delhi and Mumbai, we ensure optimal technology solutions to reduce costs, streamline business processes, and gain business advantages for our customers.
Budget: 0 Job Description Position: AI/ML Solution Architect (GenAI Specialist) Team: ML/GenAI Team Number of Resources Needed: 1 Years of Experience: 3-5 Years Educational Background: BCA, MCA, B.Tech/B.E, M.Tech/ME Type: Full-time Expected Joining Date: ASAP Role Summary: We are looking for an AI/ML Solution Architect with expertise in Generative AI (GenAI) to help design, build, and deploy cutting-edge AI solutions. Whether working in the rapid-paced innovation cycle of a startup or scaling production-grade GenAI systems for enterprise clients, you will serve as a key enabler of value-driven, secure, and scalable AI deployments. Key Responsibilities: Collaborate with sales teams to understand customer requirements and provide expert guidance on ML/Generative AI solutions across a wide range of use cases: content generation, document automation, chatbots, virtual assistants, summarization, search, personalization, and more. Evaluate and integrate open-source LLMs (e.g., LLaMA, Mistral, Falcon), commercial APIs (e.g., GPT-4, Claude, Gemini), and cloud-native GenAI platforms (e.g., AWS Bedrock, Azure OpenAI). Design and deliver compelling solutions, SoWs, and demonstrations of our Generative AI offerings to both technical and non-technical audiences. Design Retrieval-Augmented Generation (RAG), prompt engineering strategies, vector database integrations, and model fine-tuning where required. Translate business objectives into technical roadmaps, collaborating closely with product managers, engineers, and data scientists. Create prototypes and proof-of-concepts (PoCs) to validate solution feasibility and performance. Provide technical mentorship, best practices, and governance guidance across teams and clients. Skills & Experience Required: Educational Background: Master’s/Bachelor’s degree in Computer Science, Engineering, or a related field (e.g., BCA, MCA, B.Tech/B.E, M.Tech/ME). 3-5 years of experience in AI/ML development/solutioning, with at least 1-2 years in Generative AI/NLP applications. Strong command of transformers, LLMs, embeddings, and NLP methods. Proficiency with LangChain, LlamaIndex, Hugging Face Transformers, and cloud AI tools (SageMaker, Bedrock, Azure OpenAI, Vertex AI). Experience with vector databases like FAISS, Pinecone, or Weaviate. Familiarity with MLOps practices, including model deployment, monitoring, and retraining pipelines. Skilled in Python, with working knowledge of APIs, Docker, CI/CD, and RESTful services. Experience in building solutions in both agile startup environments and structured enterprise settings. Preferred/Bonus Skills: Certifications (e.g., AWS Machine Learning Specialty, Azure AI Engineer). Exposure to multimodal AI (text, image, audio/video) and Agentic AI. Experience with data privacy, responsible AI, and model interpretability frameworks. Familiarity with enterprise security, scalability, and governance standards. Soft Skills: Entrepreneurial mindset with a bias for action and rapid prototyping. Strong communication and stakeholder management skills. Comfortable navigating ambiguity in startups and structured processes in enterprises. Team player with a passion for continuous learning and AI innovation. What We Offer: The flexibility and creativity of a startup-style team with the impact and stability of enterprise-scale work. Opportunities to work with cutting-edge GenAI tools and frameworks. Collaborative environment with cross-functional tech and business teams. Career growth in a high-demand, high-impact AI/ML domain.

Posted 3 days ago

Apply

0.0 - 3.0 years

12 - 24 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Prompt Engineering, and Knowledge Graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), Relationship Extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you’ll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction. Key Responsibilities: Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction. Design and build Knowledge Graphs by structuring information from unstructured or semi-structured text. Apply Prompt Engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios. Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks. Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from a graph database. Collaborate with teams to translate business problems into structured use cases for model development. Understand custom ontologies and entity schemas for the corresponding domain. Work with graph databases like Neo4j or similar, and query using Cypher or SPARQL. Evaluate and track performance using both standard metrics and graph-based KPIs. Required Skills & Qualifications: Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and OpenAI APIs. Deep understanding of NER, relationship extraction, co-reference resolution, and semantic parsing. Practical experience in working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning. Hands-on experience with graph database design and knowledge graph generation. Proficient in Python API development (Flask/FastAPI) for serving models and utilities. Strong background in data preprocessing, text normalization, and annotation frameworks. Understanding of LLM orchestration with tools like LangChain or workflow automation. Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker). Nice to Have: Experience using LLMs for Information Extraction, summarization, or question answering over knowledge bases. Exposure to Graph Embeddings, GNNs, or semantic web technologies (RDF, OWL). Experience with cloud-based model deployment (AWS/GCP/Azure). Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone). Job Type: Full-time Pay: ₹1,200,000.00 - ₹2,400,000.00 per year Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Natural Language Processing (NLP): 3 years (Preferred) Language: English & Tamil (Preferred) Location: Chennai, Tamil Nadu (Preferred) Work Location: In person
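A minimal spaCy sketch of NER plus naive co-occurrence triples as a starting point for the knowledge-graph work described above; the sample sentence and relation label are made up:

```python
# NER with spaCy plus naive co-occurrence triples as a seed for a knowledge graph.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Bank appointed Jane Doe as Head of AI in Chennai in 2024.")

entities = [(ent.text, ent.label_) for ent in doc.ents]
print(entities)   # e.g. ORG / PERSON / GPE / DATE spans, depending on the model

# Placeholder relationship extraction: pair entities that co-occur in the same sentence.
triples = []
for sent in doc.sents:
    ents = list(sent.ents)
    for i, head in enumerate(ents):
        for tail in ents[i + 1:]:
            triples.append((head.text, "co_occurs_with", tail.text))
print(triples)
```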

Posted 3 days ago

Apply

4.0 years

0 Lacs

India

Remote

At SpicyChat, we’re on a mission to build the best uncensored roleplaying agent in the world , and we’re looking for a passionate Data Scientist to join our team. Whether you’re early in your data science career or growing into a mid-senior role, this is a unique opportunity to work hands-on with state-of-the-art LLMs in a fast-paced, supportive environment. Role Overview We’re looking for a Data Scientist (Junior to Mid-Senior level) who will support our LLM projects across the full data pipeline—from building clean datasets and dashboards to fine-tuning models and supporting cross-functional collaboration. You’ll work closely with ML engineers, product teams, and data annotation teams to bring AI solutions to life. What You’ll Be Doing ETL and Data Pipeline Development: Design and implement data extraction, transformation, and loading (ETL) pipelines. Work with structured and unstructured data from various sources. Data Preparation: Clean, label, and organize datasets for training and evaluating LLMs. Collaborate with annotation teams to ensure high data quality. Model Fine-Tuning & Evaluation: Support the fine-tuning of LLMs for specific use cases. Assist in model evaluation, prompt engineering, and error analysis. Dashboarding & Reporting: Create and maintain internal dashboards to track data quality, model performance, and annotation progress. Automate reporting workflows to help stakeholders stay informed. Team Coordination & Collaboration: Communicate effectively with ML engineers, product managers, and data annotators. Ensure that data science deliverables align with product and business goals. Research & Learning: Stay current with developments in LLMs, fine-tuning techniques, and the AI ecosystem. Share insights with the team and suggest improvements based on new findings. Qualifications Required: 1–4 years of experience in a data science, ML, or analytics role. Proficient in Python and data science libraries (Pandas, NumPy, scikit-learn). Experience with SQL and data visualization tools (e.g., Streamlit, Dash, Tableau, or similar). Familiarity with machine learning workflows and working with large datasets. Strong communication and organizational skills. Bonus Points For: Experience fine-tuning or evaluating large language models (e.g., OpenAI, Hugging Face, LLaMA, Mistral, etc.). Knowledge of prompt engineering or generative AI techniques. Exposure to tools like Weights & Biases, Airflow, or cloud platforms (AWS, GCP, Azure). Previous work with cross-functional or remote teams. Why Join NextDay AI? 🌍 Remote-first: Work from anywhere in the world. ⏰ Flexible hours: Create a schedule that fits your life. 🌴 Unlimited leave: Take the time you need to rest and recharge. 🚀 Hands-on with LLMs: Get practical experience with cutting-edge AI systems. 🤝 Collaborative culture: Join a supportive, ambitious team working on real-world impact. 🌟 Mission-driven: A chance to be part of an exciting mission and an amazing team. Ready to join us in creating the ultimate uncensored roleplaying agent? Send us your resume along with some details on your coolest projects. We’re excited to see what you’ve been working on!
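A short pandas sketch of the kind of dataset cleaning step described under data preparation; the file and column names are hypothetical:

```python
# Simple pandas cleaning step for an LLM fine-tuning dataset; file and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("conversations.csv")                      # hypothetical export with prompt/response columns
clean = (
    raw.dropna(subset=["prompt", "response"])
       .drop_duplicates(subset=["prompt", "response"])
       .assign(
           prompt=lambda df: df["prompt"].str.strip(),
           response=lambda df: df["response"].str.strip(),
       )
)
clean = clean[clean["response"].str.len() > 0]
clean.to_json("train.jsonl", orient="records", lines=True)  # JSONL is a common fine-tuning format
print(len(raw), "->", len(clean), "rows after cleaning")
```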

Posted 4 days ago

Apply

0.0 years

0 Lacs

Bengaluru

Work from Office

Role & responsibilities Assist in designing, training, and evaluating machine learning and deep learning models. Work on GenAI use cases such as text summarization, question answering, and prompt engineering. Build applications using LLMs (like OpenAI GPT, LLaMA, Mistral, Claude, or similar). Preprocess and manage large datasets for training and inference. Implement NLP pipelines using libraries like Hugging Face Transformers. Help integrate AI models into production-ready APIs or applications. Stay updated with advancements in GenAI, ML, and LLM frameworks. Preferred candidate profile Knowledge of vector databases (FAISS, Pinecone, etc.) LangChain or LlamaIndex (RAG pipelines) Experience with Kaggle competitions Awareness of ethical AI principles and model limitations Selection Process: Technical Assessment Python + ML/GenAI basics Technical Interview – Coding + Project Discussion Final Selection – Based on combined performance

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Job Title: AWS Bedrock Developer Job Description We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent. The Concentrix Technical Products and Services team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients. Hence achieving our vision. Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures. We are seeking a highly skilled AWS Bedrock Developer to design, develop, and deploy generative AI applications using Amazon Bedrock. The ideal candidate will have hands-on experience with AWS-native services, prompt engineering, and building intelligent, scalable AI solutions. Key Responsibilities Design and implement generative AI solutions using Amazon Bedrock and foundational models (e.g., Anthropic Claude, Mistral, Meta Llama). Develop and optimize prompts for various use cases including chatbots, summarization, content generation, and more. Integrate Bedrock with other AWS services such as Lambda, S3, API Gateway, and SageMaker. Build and deploy scalable, secure, and cost-effective AI applications. Collaborate with data scientists, ML engineers, and product teams to define requirements and deliver solutions. Monitor and optimize performance, cost, and reliability of deployed AI services. Stay updated with the latest advancements in generative AI and AWS services. Required Skills & Experience 5+ years of experience in cloud development, with at least 1 year working with AWS Bedrock. Strong programming skills in Java Spring boot Experience with prompt engineering and fine-tuning LLMs. Familiarity with AWS services: Lambda, S3, IAM, API Gateway, CloudWatch, etc. Understanding of RESTful APIs and microservices architecture. Excellent problem-solving and communication skills. 
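A Python/boto3 sketch of invoking a Claude model on Amazon Bedrock, illustrating the integration described above (the role itself calls for Java Spring Boot); the model ID and request schema follow Bedrock's documented Anthropic Messages format but should be verified against current AWS docs:

```python
# Python/boto3 sketch of invoking an Anthropic Claude model on Amazon Bedrock.
# Model ID and payload schema are assumptions to verify against current AWS documentation.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Summarize this support ticket in one line."}]}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```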
Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion Language Requirements: Time Type: Full time If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents

Posted 4 days ago

Apply

1.0 years

2 - 6 Lacs

Ahmedabad

On-site

About the Role We are looking for an LLM (Large Language Model) Engineer to design, build, and optimize intelligent agents powered by Large Language Models (LLMs). You will work on cutting-edge AI applications, pre-train LLMs, fine-tune open-source models, integrate multi-agent systems, and deploy scalable solutions in production environments. Key Responsibilities – (Must Have) Develop and fine-tune LLM-based models and AI agents for automation, reasoning, and decision-making. Build multi-agent systems that coordinate tasks efficiently. Design prompt engineering, retrieval-augmented generation (RAG), and memory architectures. Optimize inference performance and reduce hallucinations in LLMs. Integrate LLMs with APIs, databases, and external tools for real-world applications. Implement reinforcement learning with human feedback (RLHF) and continual learning strategies. Collaborate with research and engineering teams to enhance model capabilities. Requirements 1+ years in AI/ML, with at least 1 year in LLMs or AI agents. Strong experience in Python, LangChain, LlamaIndex, Autogen, Hugging Face, etc. Experience with open-source LLMs (LLaMA, Mistral, Falcon, etc.). Hands-on experience in LLM deployments with strong inference capabilities using robust frameworks such as vLLM, and in building multi-modal RAG systems. Knowledge of vector databases (FAISS, Chroma) for retrieval-based systems. Experience with LLM fine-tuning, downscaling, prompt engineering, and model inference optimization. Familiarity with multi-agent systems, cognitive architectures, or autonomous AI workflows. Expertise in cloud platforms (AWS, GCP, Azure) and scalable AI deployments. Strong problem-solving and debugging skills. Nice to Have Contributions to AI research, GitHub projects, or open-source communities. Experience with open-source LLMs (LLaMA, Mistral, Falcon, etc.). Knowledge of Neural Symbolic AI, AutoGPT, BabyAGI, or similar frameworks. Job Type: Full-time Pay: ₹23,671.07 - ₹55,229.87 per month Benefits: Paid sick time Provident Fund Work Location: In person
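A minimal vLLM sketch of the offline inference the requirements mention; the model ID is an assumed open-weight checkpoint and a suitable GPU is assumed:

```python
# Offline batch inference with vLLM; the model ID is an assumed open-weight checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")      # assumes a GPU with enough memory
params = SamplingParams(temperature=0.2, top_p=0.9, max_tokens=128)

prompts = [
    "Explain retrieval-augmented generation in two sentences.",
    "List three ways to reduce hallucinations in an LLM agent.",
]
for output in llm.generate(prompts, params):
    print(output.prompt)
    print("->", output.outputs[0].text.strip())
```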

Posted 4 days ago

Apply

12.0 years

0 Lacs

India

Remote

We're building a next-generation agentic AI platform and need a talented, self-starting, experienced senior developer. The role will require advanced AI manipulation (including prompt engineering) and model tuning. You will be part of a small multi-disciplinary team, solving problems that have never been solved before. Skills And Experience Excellent command of English reading, writing, and speaking Deep experience working with LLMs (OpenAI, Claude, Mistral, emphasis on open-source (smaller) models, etc.) Advanced prompt engineering across zero-, few-shot, and chain-of-thought paradigms Experience with Retrieval-Augmented Generation (RAG) pipelines Experience with model fine-tuning techniques, including LoRA, QLoRA, adapters, etc. Strong understanding of Agentic AI architectures, including multi-agent coordination, tool use, and planning Experience with Mixture of Experts (MoE) models and how to route, train, and serve them efficiently Experience with LLM tooling (LangChain, LlamaIndex, Transformers, etc.) Must be able to work UK office hours Desirable Skills Experience with TypeScript Experience with vector databases, embeddings, and hybrid search Experience with model hosting, inference infrastructure, and cloud-based deployment Experience with open-source LLMs and fine-tuning stacks (Axolotl, vLLM, etc.) Benefits “Yoodoo” - You can go offline and spend 2 hours of your work time every week on any activity that serves your mental or physical health; You will get 25 days paid Annual Leave per annum; Additional paid day off for festivals, your birthday, any celebration; Paid maternity leave; Paid paternity leave; Paid sick leave; Remote working; Union - provides assistance and service to everyone at VDP; Salary - Competitive market rate About VDP 🌟Founded 12 years ago in the UK, our company has a worldwide team that offers technology services. 👔 We help various clients, including those without their own developers. For them, we build entire teams to develop software from the ground up. We also assist clients who already have platforms and development teams but need help speeding up their work or starting new projects. 🛠️ Our skills include a broad range of technologies such as C#, Python, C++, Angular, React, Node.js, and both native and hybrid app development, as well as AI. Since we started, we have always worked remotely, so we don’t have an office, and everyone works from home. 🌲 We are a certified B Corp and have proven we are a force for good for our Employees, our Customers, our Suppliers, our Communities, and the Environment. ❤️ 10% of what we make goes to our VDP Trust, helping to support projects for Women and Girls around the world. Through our work, our values and our actions, we aim to be a force for good, not just for ourselves and our customers, but for our communities and the world at large. We have four values that we all live by. These are Collaborate, Compassion, Honesty and Inclusion.
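A plain-Python sketch of a few-shot, chain-of-thought style prompt of the kind described under prompt engineering; the task, examples, and categories are invented for illustration:

```python
# Building a few-shot, chain-of-thought style prompt as a plain template; the provider call is left out.
FEW_SHOT_TEMPLATE = """You are a support triage agent. Classify each ticket.

Ticket: "I was charged twice for my subscription."
Reasoning: The issue concerns a payment, so it belongs to billing.
Category: billing

Ticket: "The app crashes when I open settings."
Reasoning: The issue describes broken behaviour in the app, so it is a bug.
Category: bug

Think step by step, then answer with only the final category.
Ticket: "{ticket}"
Reasoning:"""

def build_prompt(ticket: str) -> str:
    return FEW_SHOT_TEMPLATE.format(ticket=ticket)

print(build_prompt("I can't reset my password from the login page."))
# The resulting string would be sent to whichever hosted or open-source LLM the team standardizes on.
```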

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Apply Now Job Title AWS Bedrock Developer Job Description We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent. The Concentrix Technical Products and Services team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients. Hence achieving our vision. Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures. We are seeking a highly skilled AWS Bedrock Developer to design, develop, and deploy generative AI applications using Amazon Bedrock. The ideal candidate will have hands-on experience with AWS-native services, prompt engineering, and building intelligent, scalable AI solutions. 🔧 Key Responsibilities Design and implement generative AI solutions using Amazon Bedrock and foundational models (e.g., Anthropic Claude, Mistral, Meta Llama). Develop and optimize prompts for various use cases including chatbots, summarization, content generation, and more. Integrate Bedrock with other AWS services such as Lambda, S3, API Gateway, and SageMaker. Build and deploy scalable, secure, and cost-effective AI applications. Collaborate with data scientists, ML engineers, and product teams to define requirements and deliver solutions. Monitor and optimize performance, cost, and reliability of deployed AI services. Stay updated with the latest advancements in generative AI and AWS services. 🧪 Required Skills & Experience 5+ years of experience in cloud development, with at least 1 year working with AWS Bedrock. Strong programming skills in Java Spring boot Experience with prompt engineering and fine-tuning LLMs. Familiarity with AWS services: Lambda, S3, IAM, API Gateway, CloudWatch, etc. Understanding of RESTful APIs and microservices architecture. Excellent problem-solving and communication skills. 
Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion Language Requirements Time Type: Full time If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hi, I’m reaching out regarding a Hybrid Work IT Support Engineer - Level 3 opportunity with one of our companies based in Bengaluru, Karnataka. Please let me know if you’re interested in discussing this further. Thank you! Title: IT Support Engineer - Level 3 Job Type: Full Time Employment Type of Work: Hybrid Work Company Location: Bengaluru, Karnataka Work Hours: 1st Shift & 2nd Shift Description: The majority of the work involves supporting customers with production issues during automation of network deployment. This is a challenging role requiring real-time debugging of issues and quick resolution for network automation. Job Responsibilities: Actively involved in troubleshooting production defects in the customer environment. Escalate to next-level support based on the priority of the issue. Understand customer demands and communicate effectively on the nature of the issue and possible service impact. Competencies Required: Overall 5+ years of relevant IT experience in the area of software development and test automation. Should have end-to-end working experience in automation development in a cloud-native environment. Automation using Python and JavaScript. Awareness of SQL is an additional advantage. Networking knowledge is an added advantage. Knowledge of automation using TOSCA and the Mistral engine. OpenShift, Docker, Helm, and container technology experience is essential. Experience troubleshooting cloud-native applications. Prior experience working in a Linux environment is a must.

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Senior Backend & DevOps Engineer (AI-Integrated Products)
Location: Remote
Employment Type: Full Time/Freelance/Part Time
Work Hours: Flexible work timing

About Us: We’re building AI-powered products that seamlessly integrate technology into everyday routines. We're now looking for a Senior Backend & DevOps Engineer who can help us scale mobile apps globally and architect their backend, while owning infrastructure, performance, and AI integrations.

Responsibilities:

Backend Engineering:
Own and scale the backend architecture (Node.js/Express or similar) for mobile apps.
Build robust, well-documented, and performant APIs.
Implement user management, session handling, usage tracking, and analytics.
Integrate 3rd-party services including OpenAI, Whisper, and other LLMs.
Optimize app-server communication and caching for global scale.

DevOps & Infrastructure:
Maintain and scale AWS/GCP infrastructure (EC2, RDS, S3, CloudFront/CDN, etc.).
Set up CI/CD pipelines (GitHub Actions preferred) for smooth deployment.
Monitor performance, set up alerts, and handle autoscaling across regions.
Manage cost-effective global infra scaling and ensure low-latency access.
Handle security (IAM, secrets management, HTTPS, CORS policies, etc.).

AI & Model Integration:
Integrate LLMs like GPT-4, Mistral, Mixtral, and open-source models.
Support fine-tuning, inference pipelines, and embeddings.
Build offline inference support and manage transcription workflows (Whisper, etc.).
Set up and optimize vector DBs like Qdrant, Weaviate, and Pinecone (see the embedding-search sketch after this posting).

Requirements:
4+ years of backend experience with Node.js, Python, or Go.
2+ years of DevOps experience with AWS/GCP/Azure, Docker, and CI/CD.
Experience deploying and managing AI/ML pipelines, especially LLMs and Whisper.
Familiarity with vector databases, embeddings, and offline inference.
Strong understanding of performance optimization, scalability, and observability.
Clear communication skills and a proactive mindset.

Bonus Skills:
Experience working on mobile-first apps (React Native backend knowledge is a plus).
Familiarity with Firebase, Vercel, Railway, or similar platforms.
Knowledge of data privacy, GDPR, and offline sync strategies.
Past work on productivity, journaling, or health/fitness apps.
Experience self-hosting LLMs or optimizing AI pipelines on edge/cloud.

Please share (optional):
A brief intro about you and your experience.
Links to your GitHub/portfolio or relevant projects.
Resume or LinkedIn profile.
Any AI/infra-heavy work you’re particularly proud of.

Contact: subham@binaryvlue.com
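As a small, hedged illustration of the embeddings and vector-search work mentioned above, the sketch below uses the OpenAI embeddings API (one of the third-party services the posting names) with a plain in-memory cosine-similarity search standing in for a managed vector DB such as Qdrant, Weaviate, or Pinecone. The model name and documents are placeholders.

```python
import numpy as np
from openai import OpenAI  # assumes the official openai Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "The mobile app syncs offline entries once a connection is available.",
]

# Embed the documents once; a vector DB (Qdrant, Weaviate, Pinecone) would store these vectors.
doc_vectors = np.array([
    d.embedding
    for d in client.embeddings.create(model="text-embedding-3-small", input=documents).data
])

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the documents most similar to the query by cosine similarity."""
    q = np.array(
        client.embeddings.create(model="text-embedding-3-small", input=[query]).data[0].embedding
    )
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:top_k]]

print(search("when do I get billed?"))
```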

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role you will:
Design, develop, and maintain scalable backend services using Python and frameworks like Django, Flask, or FastAPI.
Build responsive and interactive UIs using React.js, Vue.js, or Angular.
Develop and consume RESTful APIs, and contribute to API contract definitions, including Gen AI/OpenAI integration where applicable.
Collaborate closely with UI/UX designers, product managers, and fellow engineers to translate business requirements into technical solutions.
Ensure performance, security, and responsiveness of web applications across platforms and devices.
Write clean, modular, and testable code following industry best practices and participate in code reviews.
Architect, build, and maintain distributed systems and microservices, ensuring maintainability and scalability.
Implement and manage CI/CD pipelines using tools such as Docker, Kubernetes (Helm), Jenkins, or Ansible.
Use observability tools such as Grafana and Prometheus to monitor application performance and troubleshoot production issues (a minimal instrumentation sketch follows this posting).
Apply RAG (Retrieval-Augmented Generation) techniques, with hands-on experience in benchmarking models, selecting the most suitable model for specific use cases, and working with LLM (Large Language Model) agents.

Requirements

To be successful in this role, you should meet the following requirements:
5+ years of experience in full-stack development.
Strong proficiency in Python, with hands-on experience using Django, Flask, or FastAPI.
Solid front-end development skills in HTML5, CSS3, and JavaScript, with working knowledge of frameworks like React, Vue, or Angular.
Proven experience designing and implementing RESTful APIs and integrating third-party APIs/services.
Experience working with Kubernetes, Docker, Jenkins, and Ansible for containerization and deployment.
Familiarity with both SQL and NoSQL databases, such as PostgreSQL, MySQL, or MongoDB.
Comfortable with unit testing, debugging, and using logging tools for observability.
Experience with monitoring tools such as Grafana and Prometheus.
Proven experience with OpenAI (GPT-4/GPT-3.5), Claude, Gemini, Mistral, or other commercial/open-source LLMs.
Basic experience in data handling, including managing, processing, and integrating data within full-stack applications to ensure seamless backend and frontend functionality.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
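For the Grafana/Prometheus observability item in this posting, here is a minimal, illustrative sketch of instrumenting a Python service with the prometheus_client library. The metric names, labels, and simulated workload are hypothetical; in practice the exposed endpoint would be scraped by Prometheus and visualized in Grafana.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; Prometheus scrapes them, Grafana charts them.
REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint", "status"])
LATENCY = Histogram("api_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    # Time the request and record its outcome.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
        status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request("/generate")
```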

Posted 4 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior AI/ML Engineer
Experience: 8+ Years
Location: Bangalore (On-site)

Mandatory Skills:

LLM & Prompt Engineering:
Strong expertise in prompt engineering techniques and strategies.
In-depth understanding of how Large Language Models (LLMs) work, including control over hyperparameters such as temperature, top_p, etc. (see the sketch after this posting).
Experience in LLM fine-tuning, prompt optimization, and zero-/few-shot learning.

Agentic AI Frameworks:
Hands-on experience with agent-based frameworks like LangChain, LangGraph, and Crew AI.

Retrieval-Augmented Generation (RAG):
Implementation experience in RAG pipelines.
Experience in integrating vector databases with LLMs for contextual augmentation.

LLM Evaluation & Observability:
Knowledge of techniques and tools for LLM performance evaluation.
Familiarity with LLM observability platforms and metrics monitoring.
Ability to define and track quality benchmarks like accuracy, coherence, hallucination rate, etc.

Programming & Deployment:
Strong programming skills in Python.
Experience deploying AI/ML models or LLM-based systems on at least one major cloud platform (AWS, GCP, or Azure).

Preferred Skills (Nice to Have):
Experience working with OpenAI, Anthropic, Cohere, or open-source LLMs (LLaMA, Mistral, Falcon, etc.).
Knowledge of Docker, Kubernetes, MLflow, or other MLOps tools.
Experience with embedding models and vector databases (like Pinecone, FAISS, Weaviate).
Familiarity with transformer architecture and fine-tuning techniques.

Responsibilities:
Design, build, and optimize LLM-powered applications.
Develop and maintain prompt strategies tailored to business use cases.
Architect and implement agentic AI workflows using modern frameworks.
Build and monitor RAG pipelines for improved information retrieval.
Establish processes for evaluating and monitoring LLM behavior in production.
Collaborate with cross-functional teams including Product, Data, and DevOps.
Ensure scalable and secure deployment of models to production.

Soft Skills:
Strong problem-solving and analytical thinking.
Excellent communication and documentation skills.
Passion for staying updated with advancements in GenAI and LLM ecosystems.
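To make the hyperparameter-control point concrete, the sketch below sends the same prompt twice with different temperature and top_p values using the OpenAI Python SDK; the model name is a placeholder, and any chat-completions-compatible endpoint would look much the same.

```python
from openai import OpenAI  # assumes the official openai SDK (v1+)

client = OpenAI()

def ask(prompt: str, temperature: float, top_p: float) -> str:
    """Send one prompt with explicit sampling hyperparameters."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,      # lower = more deterministic
        top_p=top_p,                  # nucleus sampling cutoff
        max_tokens=150,
    )
    return response.choices[0].message.content

prompt = "List three risks of deploying an LLM agent to production."
print(ask(prompt, temperature=0.1, top_p=1.0))   # focused, repeatable output
print(ask(prompt, temperature=0.9, top_p=0.95))  # more varied phrasing
```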

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

Remote

About the Role
We are seeking a hands-on AI/ML Engineer with deep expertise in Retrieval-Augmented Generation (RAG) agents, Small Language Model (SLM) fine-tuning, and custom dataset workflows. You'll work closely with our AI research and product teams to build production-grade models, deploy APIs, and enable next-gen AI-powered experiences.

Key Responsibilities
Design and build RAG-based solutions using vector databases and semantic search.
Fine-tune open-source SLMs (e.g., Mistral, LLaMA, Phi, etc.) on custom datasets.
Develop robust training and evaluation pipelines with reproducibility.
Create and expose REST APIs for model inference using FastAPI (see the serving sketch after this posting).
Build lightweight frontends or internal demos with Streamlit for rapid validation.
Analyze model performance and iterate quickly on experiments.
Document processes and contribute to knowledge-sharing within the team.

Must-Have Skills
3–5 years of experience in applied ML/AI engineering roles.
Expert in Python and common AI frameworks (Transformers, PyTorch/TensorFlow).
Deep understanding of RAG architecture and vector stores (FAISS, Pinecone, Weaviate).
Experience with fine-tuning transformer models and instruction-tuned SLMs.
Proficient with FastAPI for backend API deployment and Streamlit for prototyping.
Knowledge of tokenization, embeddings, training loops, and evaluation metrics.

Nice to Have
Familiarity with LangChain, the Hugging Face ecosystem, and OpenAI APIs.
Experience with Docker, GitHub Actions, and cloud model deployment (AWS/GCP/Azure).
Exposure to experiment tracking tools like MLflow and Weights & Biases.

What We Offer
Build core tech for next-gen AI products with real-world impact.
Autonomy and ownership in shaping AI components from research to production.
Competitive salary, flexible remote work policy, and a growth-driven environment.
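Since this role pairs FastAPI model serving with rapid prototyping, here is a minimal, illustrative FastAPI inference endpoint wrapping a Hugging Face text-generation pipeline. The checkpoint is a small placeholder chosen so the sketch runs on CPU, not a recommendation for a production SLM.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="SLM inference demo")

# Small placeholder model; swap in a fine-tuned SLM checkpoint in practice.
generator = pipeline("text-generation", model="distilgpt2")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

class GenerateResponse(BaseModel):
    completion: str

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    # Run generation and return the full generated text.
    output = generator(req.prompt, max_new_tokens=req.max_new_tokens)[0]["generated_text"]
    return GenerateResponse(completion=output)

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```

A Streamlit demo for quick validation would simply POST to this /generate endpoint and render the response.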

Posted 4 days ago

Apply

6.0 years

0 Lacs

Bangalore North Rural, Karnataka, India

On-site

Position title: Data Science & AI Engineer
Experience: 6-8 years
Budget: 25-28 LPA
Notice period: 1 month / immediate joiner
Skill set: MLOps, Gen AI, SLM, LLM, Python, data science, AI engineering

Roles and responsibilities:
Develop, implement, and optimize machine learning models and AI algorithms to solve complex business problems.
Design, build, and fine-tune AI models, particularly focusing on LLMs and SLMs, using state-of-the-art techniques and architectures.
Apply advanced techniques in prompt engineering, model fine-tuning, and optimization to tailor models for specific business needs.
Deploy and manage machine learning models and pipelines on cloud platforms (AWS, GCP, Azure, etc.).
Work closely with clients to understand their data and AI needs and provide tailored solutions.
Collaborate with cross-functional teams to integrate AI solutions into broader software architectures.
Mentor junior team members and provide guidance in implementing best practices in data science and AI development.
Stay up-to-date with the latest trends and advancements in data science, AI, and cloud technologies.

Requirements:
5+ years of experience in data science, machine learning, and AI technologies.
Proven experience working with cloud platforms such as Google Cloud, Microsoft Azure, or AWS.
Expertise in programming languages such as Python, R, or Julia, and AI frameworks like TensorFlow, PyTorch, Scikit-learn, and Hugging Face Transformers (a minimal Scikit-learn sketch follows this posting).
Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Tableau).
Solid understanding of data engineering concepts including ETL, data pipelines, and databases (SQL, NoSQL).
Experience with MLOps practices and deployment of models in production environments.
Familiarity with NLP (Natural Language Processing) tasks and working with large-scale datasets.
Hands-on experience with generative AI models like GPT, Gemini, Claude, Mistral, etc.
Client-facing experience with strong communication skills to manage and engage stakeholders.
Strong problem-solving skills and an analytical mindset.
Ability to work independently and as part of a team, and to mentor and provide technical leadership to junior team members.
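As a small illustration of the Python/Scikit-learn portion of this stack, the sketch below trains and evaluates a model inside a single Pipeline on synthetic data. The dataset, model choice, and metric are placeholders; bundling preprocessing with the estimator is one common way to keep training and serving consistent in an MLOps workflow.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One Pipeline holds both the scaler and the classifier, so the same
# preprocessing is applied at training time and at inference time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1_000)),
])
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```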

Posted 5 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior AIML Engineer
Relevant Experience: 8+ years
Location: Bangalore/Pune
Employment Type: Full-time

Role Overview:
We are looking for a Senior Generative AI Engineer responsible for designing, developing, and implementing advanced AI models and solutions, particularly focusing on generative AI technologies such as large language models (LLMs), NLP, and computer vision. This role involves collaborating with cross-functional teams to translate business needs into innovative AI-driven products and services, driving AI initiatives, and ensuring seamless integration and deployment of AI systems.

Key Responsibilities:
Collaborate with executives, product managers, and stakeholders to prioritize AI initiatives.
Lead the design, development, and deployment of generative AI models and systems.
Make architectural decisions and ensure scalability, efficiency, and robustness of AI solutions.
Manage the full AI project lifecycle from research and prototyping to production deployment.
Mentor and guide junior engineers and contribute to team skill development.
Drive innovation by incorporating cutting-edge AI research and best practices.

Qualifications:
Minimum 8 years of experience in data science, machine learning, and generative AI.
Strong foundation in machine learning and deep learning, with expertise in neural networks, optimization techniques, and model evaluation.
Has led projects with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.); a minimal Hugging Face generation sketch follows this posting.
Proficiency in Python, LangChain, Hugging Face Transformers, and MLOps.
Experience with reinforcement learning and multi-agent systems for decision-making in dynamic environments.
Knowledge of multimodal AI (integrating text, images, and other data modalities into unified models).

Preferred Attributes:
Entrepreneurial mindset with a hands-on approach to development.
Experience in AI solution architecture and optimization techniques.
Ability to work in a fast-paced, business-facing environment and lead AI initiatives.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
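To ground the Hugging Face Transformers requirement, here is a minimal, illustrative text-generation sketch. The checkpoint is a small placeholder; instruction-tuned models such as Mistral or LLaMA follow the same pattern but expect a chat template and considerably more memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "distilgpt2"  # small placeholder; swap in an instruction-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Generative AI helps logistics teams by", return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,     # sampling rather than greedy decoding
        temperature=0.8,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```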

Posted 5 days ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Career Lab Consulting Pvt. Ltd. (CLC) is hiring a Senior Agentic AI Developer to architect and build the world’s first autonomous AI-powered marketing system — replacing an entire marketing department using a multi-agent system. This is a zero-to-one leadership role with massive ownership and deep-tech innovation at its core.

Position: Senior Agentic AI Developer
Location: Remote (India Preferred | Hybrid Optional)
Type: Full-Time | High Ownership Role
CTC: ₹12–15 LPA (Fixed) + Performance Bonuses
Apply: hr@careerlabconsulting.com
WhatsApp HR Mamta: +91-8700827753

Your Mission
You will single-handedly design and deploy a cost-efficient, tool-lite, fully autonomous Agentic AI system to run the marketing ops of InternX–AI — India’s flagship Agentic AI Career Accelerator. You’ll lead architecture, development, and scaling, and mentor interns, enabling the system to operate without human intervention across:
Content Creation
CRM & Email Campaigns
Paid Ads Management
SEO Workflows
Social Media Posting
Lead Nurturing
Analytics & Reporting

What You'll Build (in 90-120 Days)
Multi-agent architecture with self-healing, planning, and memory (a framework-agnostic agent-loop sketch follows this posting)
Lightweight, cost-optimized stack (OSS > SaaS)
End-to-end autonomous marketing workflows
A trained intern team to support and scale the AI system
Interactive dashboards for business teams to control & review outputs

Tech Stack You'll Work With
• Python (async, FastAPI, LangGraph)
• LLMs (OpenAI, Claude, LLaMA, Mistral, etc.)
• Agent Frameworks (LangGraph, CrewAI, CAMEL, OpenAgents)
• RAG + Vector DBs (FAISS, Chroma, Weaviate)
• Prompt chaining, memory modules, toolformer logic
• DevOps: Docker, Git, CI/CD
• Web scraping, headless browsers, API agents

Who We're Looking For
Zero-to-One Builder: You love uncharted territory
Agentic Thinker: Not just prompt engineering — full autonomy loops
Intern Mentor: You’re a leader who uplifts junior talent
Cost Hacker: You always think ROI, not just API
Future CTO DNA: You're thinking 3 years ahead, not 3 weeks

Additional Skills
• Agent simulations (AutoGPT/CAMEL-style dialogue agents)
• Custom vector memory systems
• RLHF or small-model fine-tuning
• Agentic learning from feedback loops

Mentorship Duties
You’ll lead and upskill a small batch of interns across:
• LLM workflows and prompt chains
• Python automation and agent-based systems
• AI design logic for scalable execution
• Real-time marketing agent delivery

What You'll Get
Remote-first freedom with async flexibility
Ownership of a first-of-its-kind Agentic AI platform
Exposure to InternX–AI: India's flagship AI career tech brand
Intern + DevOps support to let you focus on building
Outcome-oriented culture with innovation at the core

How to Apply
Send us:
• Your resume (or LinkedIn profile link)
• GitHub/code samples or project write-ups (Notion-style preferred)
• A short note: "An autonomous AI system you've built or want to build"
Email: hr@careerlabconsulting.com
WhatsApp HR Mamta: +91-8700827753
Subject Line: Senior Agentic AI Developer – Marketing Autopilot Project

Not a Traditional Role
This is not prompt engineering. This is not marketing automation. This is autonomous business execution — built from scratch. If you're ready to build the future, join us.
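As a framework-agnostic illustration of the agent-loop idea behind this role, the sketch below wires a stubbed planner call to a tiny tool registry. The tools, the planner output format, and the memory handling are all hypothetical simplifications of what frameworks like LangGraph or CrewAI formalize; the LLM call is deliberately stubbed out.

```python
from typing import Callable

# Tool registry: in a real system these would hit ad platforms, a CRM, a CMS, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "draft_post": lambda topic: f"[draft] LinkedIn post about {topic}",
    "schedule_email": lambda audience: f"[scheduled] nurture email for {audience}",
}

def call_llm(prompt: str) -> str:
    """Stub for an LLM planner (OpenAI, Claude, a local LLaMA/Mistral model, etc.).
    A real planner would return the next tool and argument as structured output."""
    return "draft_post: Agentic AI career accelerators"

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        decision = call_llm(f"Goal: {goal}\nHistory: {memory}\nPick a tool and argument.")
        tool_name, _, argument = decision.partition(":")
        tool = TOOLS.get(tool_name.strip())
        if tool is None:
            break  # planner chose an unknown tool; stop rather than guess
        observation = tool(argument.strip())
        memory.append(observation)  # feed results back into the next planning step
    return memory

print(run_agent("Promote the InternX-AI launch this week"))
```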

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies