
649 Mistral Jobs - Page 4

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🧠 Job Title: Senior Machine Learning Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 4–8 years
Education: B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields

🚀 About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI, and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+, analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We're backed by marquee investors, industry veterans, and AI experts, and we're expanding fast. As a Senior Machine Learning Engineer, you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights.

🎯 Role Overview
This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You'll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value.

🧪 Key Responsibilities
🔬 1. Model Design, Training, and Optimization
Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources. Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems. Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models.
📊 2. Data Engineering & Feature Extraction
Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks. Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks. Automate data collection from APIs, CRMs, sales transcripts, and call logs.
⚙️ 3. Productionizing ML Pipelines
Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks). Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines. Ensure production readiness: logging, monitoring, rollback, and fail-safes.
📈 4. Experimentation & Evaluation
Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops. Continuously optimize model performance (latency, accuracy, precision-recall trade-offs). Implement drift detection and re-training pipelines for models in production.
🔁 5. Collaboration with Product & Engineering
Translate business problems into ML problems and align modeling goals with user outcomes. Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features. Contribute to the product roadmap with ML-driven ideas and prototypes.
🛠️ 6. Innovation & Technical Leadership
Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques. Drive innovation in voice-to-insight systems (ASR + Diarization + NLP). Mentor junior engineers and contribute to best practices in ML development and deployment.

🧰 Tech Stack
🔧 Languages & Frameworks: Python (core), SQL, Bash; PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM
🧠 ML & AI Ecosystem: Transformers, RNNs, CNNs, CRFs; BERT, RoBERTa, GPT-style models; OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude; FAISS, Pinecone, Qdrant, LlamaIndex
☁️ Deployment & Infrastructure: Docker, Kubernetes, GitHub Actions, Jenkins; AWS (EC2, Lambda, S3, SageMaker), GCP, Azure; Redis, PostgreSQL, MongoDB
📊 Monitoring & Experimentation: MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana

👨‍💼 Qualifications
🎓 Education: Bachelor's or Master's degree in CS, AI, Statistics, or related quantitative disciplines. Certifications in advanced ML, data science, or AI are a plus.
🧑‍💻 Experience: 4–8 years of hands-on experience in applied machine learning. Demonstrated success in deploying models to production at scale. Deep familiarity with transformer-based architectures and model evaluation.

✅ You'll Excel In This Role If You…
Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration. Obsess over clean, maintainable, reusable code and pipelines. Think from first principles and challenge model assumptions when they don't work. Are deeply curious and have built multiple projects just because you wanted to know how something works. Are comfortable working with ambiguity, fast timelines, and real-time data challenges. Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos.

💼 What You'll Get at Darwix AI
Work with some of the brightest minds in AI, product, and design. Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases. Direct mentorship from senior architects and AI scientists. Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory. Opportunity to shape the future of a global-first AI startup built from India. Hands-on experience with the most advanced tech stack in applied ML and production AI. Front-row seat to a generational company that is redefining enterprise AI.

📩 How to Apply
Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: "What's the most interesting ML system you've built — and what made it work?"
Email: people@darwix.ai
Subject: Senior ML Engineer – Application

🔐 Final Notes
We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you're looking for a role where you own real impact, push technical boundaries, and work with a team that's as obsessed with AI as you are — then Darwix AI is the place for you. Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
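As an illustration of the "Productionizing ML Pipelines" responsibility above, here is a minimal, hedged sketch of serving a pre-trained model behind a FastAPI endpoint. The model file, feature names, and route are hypothetical placeholders, not Darwix AI's actual system.

```python
# Illustrative sketch: expose a pre-trained scikit-learn classifier via FastAPI.
# "lead_scorer.joblib" and the feature schema are invented for this example.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="lead-scoring-api")
model = joblib.load("lead_scorer.joblib")  # hypothetical pre-trained model artifact

class CallFeatures(BaseModel):
    duration_sec: float
    sentiment: float
    num_objections: int

@app.post("/score")
def score(features: CallFeatures) -> dict:
    row = [[features.duration_sec, features.sentiment, features.num_objections]]
    probability = float(model.predict_proba(row)[0][1])
    return {"conversion_probability": probability}

# Run locally with: uvicorn app:app --reload  (then POST JSON to /score)
```

In practice such an endpoint would sit behind the logging, monitoring, and rollback safeguards the posting mentions.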

Posted 1 week ago

Apply

12.0 years

3 - 7 Lacs

Hyderābād

On-site

Enterprise Architect (LLMs, GenAI, AI/ML)
Experience: 12+ years
Location: India
Note: This role requires flexibility to travel or relocate to Abu Dhabi (UAE) for onsite client requirements (typically 2 to 3 months at a time).

About the Role
NorthBay Solutions seeks an Enterprise Architect with proven GenAI leadership, a strong AI/ML foundation, and a solid enterprise architecture background. We want seasoned professionals with deployment experience across on-premises, cloud, and hybrid environments.

Core Responsibilities
Design end-to-end AI-powered enterprise solutions integrating traditional systems with AI/ML. Lead 8–12 engineers across full-stack, DevOps, and AI/ML specializations in Agile environments. Drive technical decisions spanning infrastructure, databases, APIs, microservices, and AI components.

Technical Requirements
GenAI Leadership (3+ Years) – PRIMARY: Expert-level LLM experience (LLaMA, Mistral, GPT) including fine-tuning and deployment. Agentic AI, multi-agent systems, and agentic RAG implementations. Vector databases, LangChain, LangGraph, and modern GenAI toolchains. Advanced prompt engineering and Chain-of-Thought techniques. 3+ production GenAI applications successfully implemented.
AI/ML Foundation (4–5 Years): Hands-on AI/ML experience building production systems with Python, TensorFlow, and PyTorch. ML model lifecycle management from development to deployment and monitoring. Integration of AI/ML models with existing enterprise architectures.
Enterprise Architecture & Deployment (8+ Years): Full-stack development with the MERN stack or equivalent. Deployment experience across on-premises, cloud, and hybrid environments. Kubernetes expertise for container orchestration and deployment. Database proficiency: SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB). API development: RESTful services, GraphQL, microservices architecture. DevOps experience: Docker, CI/CD pipelines, infrastructure automation.
Cloud & Infrastructure: Strong cloud experience (AWS/Azure/GCP) with ML services. AWS: SageMaker, Bedrock, Lambda, API Gateway or equivalent. Hybrid architecture design combining on-prem, cloud, and AI/ML systems.
Proven Delivery: 7+ complex projects delivered across enterprise systems and AI/ML solutions. Team leadership experience managing diverse technical teams.

Requirements (Good to have)
AWS/Azure/GCP certifications (Solutions Architect + ML Specialty preferred). Strong communication skills bridging business and technical requirements. Agile/Scrum leadership experience with measurable team performance improvements.

Ideal Candidate
Priority expertise: GenAI leadership, AI/ML foundation, and enterprise architecture with deployment experience across on-premises, cloud, and hybrid environments.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

What You’ll Do As an AI Engineer at Wednesday, you’ll design and build production-ready AI systems using state-of-the-art language models, vector databases, and modern AI frameworks. You’ll own the full lifecycle of AI features — from prototyping and prompt engineering to deployment, monitoring, and optimization. You’ll work closely with product and engineering teams to ensure our AI solutions deliver real business value at scale. Your Responsibilities System Architecture & Development Architect and implement AI applications leveraging transformer-based LLMs, embeddings, and vector similarity techniques. Build modular, maintainable codebases using Python and AI frameworks. Retrieval-Augmented Generation & Semantic Search Design and deploy RAG systems with vector databases such as Pinecone, Weaviate, or Chroma to power intelligent document search and knowledge retrieval. LLM Integration & Optimization Integrate with LLM platforms (OpenAI, Anthropic) or self-hosted models (Llama, Mistral), including prompt engineering, fine-tuning, and model evaluation. Experience with AI orchestration tools (LangFlow, Flowise), multimodal models, or AI safety and evaluation frameworks. AI Infrastructure & Observability Develop scalable AI pipelines with proper monitoring, evaluation metrics, and observability to ensure reliability in production environments. End-to-End Integration & Rapid Prototyping Connect AI backend to user-facing applications; prototype new AI features quickly using frontend frameworks (React/Next.js). Cross-Functional Collaboration Partner with product managers, designers, and fellow engineers to translate complex business requirements into robust AI solutions. Requirements Have 3–5 years of experience building production AI/ML systems at a consulting or product-engineering firm. Possess deep understanding of transformer architectures, vector embeddings, and semantic search. Are hands-on with vector databases (Pinecone, Weaviate, Chroma) and RAG pipelines. Have integrated and optimized LLMs via APIs or local deployment. Are proficient in Python AI stacks (LangChain, LlamaIndex, Hugging Face). Have built backend services (FastAPI, Node.js, or Go) to power AI features. Understand AI UX patterns (chat interfaces, streaming responses, loading states, error handling). Can deploy and orchestrate AI systems on AWS, GCP, or Azure with containerization and orchestration tools. Bonus: Advanced React/Next.js skills for prototyping AI-driven UIs. Benefits Creative Freedom: A culture that empowers you to innovate and take bold product decisions for client projects. Comprehensive Healthcare: Extensive health coverage for you and your family. Tailored Growth Plans: Personalized professional development programs to help you achieve your career aspirations
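As a hedged illustration of the RAG and semantic-search work described in this listing, here is a minimal sketch using Chroma's in-memory client (one of the vector databases named above). The documents, collection name, and prompt are invented; a production system would add chunking, metadata, and evaluation.

```python
# Minimal retrieval-augmented generation sketch with Chroma (illustrative only).
import chromadb

client = chromadb.Client()  # in-memory instance; Chroma applies a default embedder
docs = client.get_or_create_collection(name="knowledge_base")
docs.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "Refunds are processed within 5 business days of approval.",
        "Premium-tier customers have a dedicated support channel.",
    ],
)

question = "How long do refunds take?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

# The retrieved context would then be passed to an LLM (OpenAI, Anthropic, or a
# self-hosted Llama/Mistral model) as grounding for the answer.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```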

Posted 1 week ago

Apply

0.0 - 3.0 years

12 - 16 Lacs

Thiruvananthapuram, Kerala

On-site

We are seeking a Senior Software Engineer – AI with 3+ years of hands-on AI/ML experience and a strong background in Generative AI and Agentic AI . This role is perfect for individuals who thrive in startup-like environments —fast-paced, product-focused, and innovation-driven. You’ll design and deploy intelligent, scalable AI systems leveraging cutting-edge technologies like LLMs and autonomous agents. Key Responsibilities Build and deploy AI-driven applications using Generative AI and Large Language Models (LLMs). Design and implement Agentic AI systems capable of multi-step reasoning and autonomous execution. Collaborate with product, design, and engineering teams to integrate AI capabilities into products . Write clean, scalable code and develop robust APIs and services for AI deployment. Manage end-to-end feature delivery from research and experimentation to deployment and monitoring. Stay updated on emerging AI frameworks, tools, and best practices; apply them in real-world products. Mentor junior team members and contribute to a high-performing engineering culture . Required Skills & Experience 3–6 years total software development experience (3+ years in AI/ML). Strong proficiency in Python with hands-on experience in PyTorch, TensorFlow, Transformers (Hugging Face) . Proven expertise with LLMs (GPT, Claude, Mistral) and Generative AI (text, image, or audio). Experience with Agentic AI frameworks (LangChain, AutoGPT, Semantic Kernel). Practical experience deploying ML models to production environments . Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts . Understanding of API development, version control (Git), DevOps/MLOps practices . Ability to work in fast-paced, product-driven environments with adaptability and ownership. Why Join Us Work on cutting-edge Generative AI and Agentic AI systems . Fast-paced, innovation-focused work culture with high impact. Growth opportunities in a rapidly evolving AI ecosystem . Apply today to help shape the next generation of AI-powered products Job Type: Full-time Pay: ₹1,200,000.00 - ₹1,600,000.00 per year Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred) Experience: AI: 3 years (Preferred) Work Location: In person
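To illustrate the multi-step "agentic" pattern this posting refers to, here is a framework-free sketch of a plan-then-execute loop using the OpenAI Python SDK. The model name, prompts, and goal are placeholders; frameworks such as LangChain, AutoGPT, or Semantic Kernel wrap similar loops with tool use and memory.

```python
# Illustrative two-step agent-style loop (plan, then execute) with the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system: str, user: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content

goal = "Summarise this week's support tickets and propose three fixes."
plan = ask("You are a planner. Return numbered steps only.", goal)
result = ask("You are an executor. Carry out the plan and report the outcome.",
             f"Goal: {goal}\nPlan:\n{plan}")
print(result)
```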

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

General Summary:
The Senior AI Engineer (2–5 years' experience) is responsible for designing and implementing intelligent, scalable AI solutions with a focus on Retrieval-Augmented Generation (RAG), Agentic AI, and Modular Cognitive Processes (MCP). This role is ideal for individuals who are passionate about the latest AI advancements and eager to apply them in real-world applications. The engineer will collaborate with cross-functional teams to deliver high-quality, production-ready AI systems aligned with business goals and technical standards.

Essential Duties & Responsibilities:
Design, develop, and deploy AI-driven applications using RAG and Agentic AI frameworks. Build and maintain scalable data pipelines and services to support AI workflows. Implement RESTful APIs using Python frameworks (e.g., FastAPI, Flask) for AI model integration. Collaborate with product and engineering teams to translate business needs into AI solutions. Debug and optimize AI systems across the stack to ensure performance and reliability. Stay current with emerging AI tools, libraries, and research, and integrate them into projects. Contribute to the development of internal AI standards, reusable components, and best practices. Apply MCP principles to design modular, intelligent agents capable of autonomous decision-making. Work with vector databases, embeddings, and LLMs (e.g., GPT-4, Claude, Mistral) for intelligent retrieval and reasoning. Participate in code reviews, testing, and validation of AI components using frameworks like pytest or unittest. Document technical designs, workflows, and research findings for internal knowledge sharing. Adapt quickly to evolving technologies and business requirements in a fast-paced environment.

Knowledge, Skills, and/or Abilities Required:
2–5 years of experience in AI/ML engineering, with at least 2 years in RAG and Agentic AI. Strong Python programming skills with a solid foundation in OOP and software engineering principles. Hands-on experience with AI frameworks such as LangChain, LlamaIndex, Haystack, or Hugging Face. Familiarity with MCP (Modular Cognitive Processes) and their application in agent-based systems. Experience with REST API development and deployment. Proficiency in CI/CD tools and workflows (e.g., Git, Docker, Jenkins, Airflow). Exposure to cloud platforms (AWS, Azure, or GCP) and services like S3, SageMaker, or Vertex AI. Understanding of vector databases (e.g., OpenSearch, Pinecone, Weaviate) and embedding techniques. Strong problem-solving skills and ability to work independently or in a team. Interest in exploring and implementing cutting-edge AI tools and technologies. Experience with SQL/NoSQL databases and data manipulation. Ability to communicate technical concepts clearly to both technical and non-technical audiences.

Educational/Vocational/Previous Experience Recommendations:
Bachelor's or Master's degree in computer science or a related field. 2+ years of relevant experience.

Working Conditions:
Hybrid - Pune Location
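As a hedged example of the pytest-based validation of AI components mentioned above, here is a small sketch: a document-chunking helper (a common RAG pre-processing step) and a unit test for it. The helper and its behaviour are invented for illustration, not part of this posting.

```python
# Hypothetical chunking helper plus a pytest check of its word-limit behaviour.
def chunk_text(text: str, max_words: int = 50) -> list[str]:
    """Split text into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def test_chunk_text_respects_word_limit():
    text = "word " * 120
    chunks = chunk_text(text, max_words=50)
    assert len(chunks) == 3
    assert all(len(chunk.split()) <= 50 for chunk in chunks)

# Run with: pytest test_chunking.py
```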

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

About The Job
The Global Telco CDS organization has an opportunity for an experienced Associate Consultant, at a time when Red Hat is driving the way telecom operators and service providers are transforming to the Cloud, CNF, NFV, and SDN. You will be engaged in Cloudband, Cloud Platforms, and NFV (vIMS, vPPC, vEPC, …) delivery projects, spanning the Cloud portfolio as well as third-party Cloud infrastructures. As an expert Associate Consultant you will participate in Cloud/NFV/SDN customer delivery projects, with a focus on OpenStack, OpenShift, and Kubernetes cloud components, spanning High- and Low-Level Design, Test & Customer Acceptance Design (strategy, test cases), customer lab integration & validation, delivery (install and commissioning in the field), and customer field acceptance. You will perform solution-based installation, configuration, and acceptance of pre-architected solutions, either remotely or on-site – to that end, some travel to customer sites will be required, hence your willingness to travel abroad for short trips. You will also provide remote or on-site expert support for on-site solution engineers during installation, acceptance, and diagnosis activities, and take care of integration issue handling during deployment.

What Will You Do
Participation in customer project activities related to Cloud/NFV/SDN. Understand our customers' product functional and technical requirements. Understand how products integrate into our customers' environments. Understand customization requirements that the customer will need. Assist as a consultant in customer presentations, documentation, and workshops. Participate in the High-Level Design (HLD) approval cycle of the customer's solution. Be responsible for Low-Level Design (LLD), with detailed architecture and design of the solution. Be responsible for Test Design (strategy, test cases, and reports). Produce customer documentation. Participate in the deployment industrialisation process to shorten lead times and reduce deployment effort. Prepare and execute Proofs of Concept. Upgrade and test customer-specific software/hardware before it is implemented in the customer's environment. Product and application integration. Perform solution-based installation, configuration, and acceptance of pre-architected solutions, either remotely or on-site. Remote expert support for on-site integration engineers during installation, acceptance, and diagnosis activities. Integration issue handling during deployment. Develop enough knowledge of our customers' environments to participate in customer discussions and facilitate integration recommendations. Some travel to customer sites to conduct troubleshooting and assist with migrations, particularly in production environments. Manage customer technical requests and regularly update the customer on progress.

What Will You Bring
Expertise in NFV, SDN, and Cloud concepts, combined with practical virtualization experience. Expertise in Cloud Management Systems, Virtual Infrastructure Management systems, and Cloud Infrastructure Management Systems is required. Technical and practical knowledge of OpenStack, OpenShift, Kubernetes, and RHEL is key. Expertise in HEAT/HOT, Ansible, and Mistral concepts is required. Familiarity with cloud scripting (YAML, …) and cloud configuration management recipes: OpenStack Heat, Chef (Ruby), Puppet (Ruby), etc.
Knowledge of CBIS, NCS, OCP Expertise in Containers, Kubernetes, and Microservices concepts You are personally committed to quality You love working in a customer-facing environment You operate autonomously and are result-driven You take ownership and accountability You are flexible in taking up different roles and tackling multiple technologies, being a quick learner You have strong negotiation and communication skills You are capable of showing technical and organizational leadership Specific Additional Information You have a Telecommunications/Electronics/SW/Computer master’s degree or equivalent through experience You have multiple years of experience (at least 5 years) in e2e integration and validation of telecom solutions Very high English proficiency Up to ~25% travel (flexible) About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a skilled professional, your primary responsibility will involve designing and implementing cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to tackle specific business challenges. You will be tasked with creating conversational AI agents and chatbots that provide seamless, human-like interactions tailored to client needs. Additionally, you will develop and optimize Retrieval-Augmented Generation (RAG) models to enhance AI's ability to retrieve and synthesize pertinent information for accurate responses. You will manage data lakes and data warehouses (including Snowflake) and use Databricks for large-scale data storage and processing. You are expected to have a thorough understanding of Machine Learning Operations (MLOps) practices and to manage the complete lifecycle of machine learning projects, from data preprocessing to model deployment.

You will play a crucial role in conducting advanced data analysis to extract actionable insights and support data-driven strategies across the organization. Collaborating with stakeholders from various departments, you will align AI initiatives with business requirements to develop scalable solutions. Additionally, you will mentor junior data scientists and engineers, encouraging innovation, skill enhancement, and continuous learning within the team. Staying updated on the latest advancements in AI and deep learning, you will experiment with new techniques to enhance model performance and drive business value. Effectively communicating findings to both technical and non-technical audiences through reports, dashboards, and visualizations will be part of your responsibilities. Furthermore, you will utilize cloud platforms like AWS Bedrock to deploy and manage AI models at scale, ensuring optimal performance and reliability.

Your technical skills should include hands-on experience with PyTorch, TensorFlow, and scikit-learn for deep learning and machine learning tasks. Proficiency in Python or R programming, along with knowledge of big data technologies like Hadoop and Spark, is essential. Familiarity with MLOps, data handling tools such as pandas and dask, and cloud computing platforms like AWS is required. Skills in the LlamaIndex and LangChain frameworks, as well as data visualization tools like Tableau and Power BI, are desirable.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field. Specialization in deep learning, significant experience with PyTorch and TensorFlow, and familiarity with reinforcement learning, NLP, and generative models are expected. In addition to challenging work, you will enjoy a friendly work environment, work-life balance, company-sponsored medical insurance, a 5-day work week with flexible timings, frequent team outings, and yearly leave encashment. This exciting opportunity is based in Ahmedabad.
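As a hedged illustration of the PyTorch model-building work this posting describes, here is a minimal sketch: a small feed-forward classifier and one training step. The layer sizes and data are synthetic placeholders, not a model used by this employer.

```python
# Minimal PyTorch sketch: define a tiny classifier and run one optimization step.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(8, 16)          # synthetic batch of 8 samples
labels = torch.randint(0, 2, (8,))     # synthetic binary labels

optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```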

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AIML Engineer - MBM
Relevant Experience: 5+ Years
Location: Bangalore/Pune
Employment Type: Full-time

Shape the Future with Generative AI
Are you passionate about harnessing cutting-edge Generative AI to create transformative real-world applications? Do you dream of building autonomous systems that learn, adapt, and make decisions independently? At Maersk, we're reimagining how businesses solve complex problems with the power of Generative AI and Autonomous Agents. We're looking for an AI/ML Engineer to join our dynamic team and play a pivotal role in building AI solutions that will define the future of intelligent systems. If you're excited about applying your expertise to projects that disrupt industries and drive measurable impact, we want to hear from you! This is an exciting opportunity for self-motivated engineers with technical expertise, creative problem-solving skills, and a talent for disruptive process transformation using AI/ML.

What you'll do:
Lead with autonomy: Take ownership of AI/ML projects from ideation to deployment, pushing the boundaries of innovation. Design the future: Develop and fine-tune Generative AI models (LLMs, diffusion models, GANs, VAEs, etc.) to optimize SCP business processes and enhance productivity. Empower AI agents: Create agentic AI architectures for autonomous decision-making, task delegation, and multi-agent collaboration using Agentic AI frameworks like AutoGPT. Innovate with LLMs: Build and optimize LLM applications, leveraging RAG, and build robust machine learning pipelines for NLP and multimodal AI tasks. Work with cutting-edge tools: Harness the power of vector databases (e.g., Pinecone, FAISS, ChromaDB) and LLM APIs (OpenAI, Anthropic, Hugging Face, Mistral, Llama). Collaborate for impact: Partner with cross-functional teams to integrate AI solutions into real-world applications (chatbots, copilots, automation tools, etc.). Stay ahead: Perform continuous research on state-of-the-art AI methodologies, exploring advancements in Generative AI, Autonomous Agents, and NLP to drive innovation.

Required Skills & Qualifications [Must have]
Strong foundation in Machine Learning & Deep Learning with expertise in neural networks, optimization techniques, and model evaluation. Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.). Proficiency in Python, LangChain, Hugging Face Transformers, MLOps. Experience with Reinforcement Learning and multi-agent systems for decision-making in dynamic environments. Knowledge of multimodal AI (integrating text, image, and other data modalities into unified models).

Nice-to-have Skills
Experience with Prompt Engineering, Fine-tuning, and RAG techniques. Familiarity with Cloud Platforms (AWS, GCP, Azure) and deployment tools like Docker, Kubernetes, FastAPI, or Flask.

Why Join Us:
Innovate at scale: Work on groundbreaking Generative AI technologies that redefine what's possible. Transform industries: Your work will directly impact how global organizations drive productivity and solve complex supply chain challenges. Collaborate with the best: Join a team of forward-thinking engineers, researchers, and product visionaries who are shaping the future of AI. Accelerate your growth: Enjoy opportunities to learn, grow, and lead in a fast-paced, innovation-driven environment. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking.
Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
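As a hedged illustration of the vector-database work named in this posting (Pinecone, FAISS, ChromaDB), here is a minimal FAISS sketch: index a set of embedding vectors and run a nearest-neighbour query. Real pipelines would use embeddings from a text encoder rather than random vectors.

```python
# Illustrative FAISS sketch: exact L2 nearest-neighbour search over random vectors.
import faiss
import numpy as np

dim = 128
corpus_vectors = np.random.random((1000, dim)).astype("float32")
query_vector = np.random.random((1, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact search; IVF/HNSW variants scale further
index.add(corpus_vectors)
distances, ids = index.search(query_vector, 5)
print(ids[0], distances[0])
```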

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AIML Engineer – Global Data Analytics, Technology (Maersk)
This position will be based in India – Bangalore/Pune

A.P. Moller - Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers' supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners.

The Brief
In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What I'll be doing – your accountabilities?
Design, develop, and implement robust, scalable, and optimized machine learning and deep learning models, with the ability to iterate with speed. Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards. Implement best practices in security, pipeline automation, and error handling using programming and data manipulation tools. Identify and implement the right data-driven approaches to solve ambiguous and open-ended business problems, leveraging data engineering capabilities. Research and implement new models, technologies, and methodologies and integrate these into production systems, ensuring scalability and reliability. Apply creative problem-solving techniques to design innovative tools, develop algorithms, and optimize workflows. Independently manage and optimize data solutions, perform A/B testing, and evaluate the performance of systems. Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects. Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration. Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational Skills
Has mastered programming concepts and can demonstrate programming skills in complex scenarios. Understands the skills below beyond the fundamentals and can demonstrate them in most situations without guidance: AI & Machine Learning, Data Analysis, Machine Learning Pipelines, Model Deployment.

Specialized Skills
Understands the following beyond the fundamentals and can demonstrate them in most situations without guidance: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, Data Processing Frameworks. Understands the basic fundamentals of Technical Documentation and can demonstrate them in common scenarios with some guidance.

Qualifications & Requirements
BSc/MSc/PhD in computer science, data science, or a related discipline with 5+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience. Good problem-solving skills, for both technical and non-technical domains. Good broad understanding of ML and statistics covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning. 4+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g., scikit-learn, PyTorch, etc.). Hands-on experience building end-to-end data products based on AI/ML technologies. Experience with collaborative development workflows: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD. Strong foundation with expertise in neural networks, optimization techniques, and model evaluation. Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.). Proficiency in Python, LangChain, Hugging Face Transformers, MLOps. Experience with Reinforcement Learning and multi-agent systems for decision-making in dynamic environments.
Knowledge of multimodal AI (integrating text, image, and other data modalities into unified models). Team player, eager to collaborate.

Preferred Experiences
In addition to the basic qualifications, it would be great if you have… Hands-on experience with common OR solvers such as Gurobi. Experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash, Streamlit, etc. Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/…). Experience with Spark and distributed computing. Strong hands-on experience with MLOps solutions, including open-source solutions. Experience with cloud-based orchestration technologies, e.g., Airflow, KubeFlow, etc. Experience with containerization (Kubernetes & Docker).

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
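As a hedged sketch of the standard classification workflow referenced in the qualifications (train/validation split, model fit, evaluation with scikit-learn), here is a minimal example on a bundled toy dataset, not Maersk data.

```python
# Illustrative scikit-learn sketch: train a classifier and report hold-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"hold-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```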

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

AI Systems Architect / LLM-Native Generalist
Location: South Mumbai, Maharashtra
Type: Full-time (Remote/in person)
Compensation: Based on capability, experience, and potential

Company Overview
SBEK (Sab-Ek) is a jewelry brand built on service, myth, and light. Every piece we create carries ARKA's Light, a mythical energy passed through the Crown Prince of Genoria, whose glowing eyes are now present in every pendant.

Profile Overview
We are building a new kind of company, one that integrates storytelling, AI systems, product design, and social impact. To support this vision, we're looking for a highly capable and self-directed AI-native builder to help us design, test, and deploy advanced AI tools and infrastructure across our brand and operations. This role requires strong working knowledge of no-code and low-code platforms, modern LLM tooling, and the emerging AI agent ecosystem. You should be someone who actively experiments, learns quickly, and executes reliably.

Roles and Responsibilities:
Architect and maintain internal AI workflows across tools like Supabase, Lovable, Cursor, n8n, Airtable, and others. Set up and manage authentication, database, and payment integrations (e.g. Stripe). Use and evaluate tools like LangChain, GPT-4, Claude, Mistral, Flowise, Vercel AI SDK, and other orchestration or RAG frameworks. Work with the founder to prototype products and systems using a mix of no-code, automation, and API logic. Stay consistently up to date with LLM tools, open-source projects, and AI-enabled platforms. Create simple documentation or systems maps that help the rest of the team stay aligned.

Required Skills & Experience:
Has working fluency with multiple AI-native tools, including but not limited to: Lovable, Cursor, n8n, Supabase, Zapier, Vercel, Firebase, Flowise, Make, LangChain. Understands backend logic, user roles, API integrations, and real-time data flows. Can deploy secure auth systems and subscription workflows without relying on plug-and-play templates. Is independent, thoughtful, and able to turn open-ended ideas into working tools. Understands prompt engineering, vector databases, and the fundamentals of model selection. Is willing to test, fail, fix, and ship at speed.

Bonus Qualifications:
You've built custom GPTs or personal AI assistants. You've connected multiple tools or models in unique ways. You're interested in the intersection of AI and social good. You've deployed apps or automations that are in use today. You're a strong communicator and able to explain your logic clearly.

This is a role for someone who already understands what's happening in AI and wants to go deeper.

Please Note: We are not looking to train someone from scratch. We are looking to work with someone who is already ahead and wants to build with meaning. If you're actively exploring AI tools, using modern platforms like Lovable and Cursor, and thinking about how they combine into products, ecosystems, or businesses, we would like to hear from you.

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About The Role We are seeking a highly motivated and creative Platform Engineer with a true research mindset. This is a unique opportunity to move beyond traditional development and step into a role where you will ideate, prototype, and build production-grade applications powered by Generative AI. You will be a core member of a platform team, responsible for developing both internal and customer-facing solutions that are not just functional but intelligent. If you are passionate about the MERN stack, Python, and the limitless possibilities of Large Language Models (LLMs), and you thrive on building things from the ground up, this role is for you. Core Responsibilities - Full-Stack Engineering : Write clean, scalable, and robust code across the MERN stack (MongoDB, Express.js, React, Node.js) and Python. AI-Powered Product Development : Create and enhance key products such as : Intelligent chatbots for customer service and internal support. Automated quality analysis and call auditing systems using LLMs for transcription and sentiment analysis. AI-driven internal portals and dashboards to surface insights and streamline workflows. Gen AI Integration & Optimization : Work hands-on with foundation LLMs, fine-tuning custom models, and implementing advanced prompting techniques (zero-shot, few-shot) to solve specific business problems. Research & Prototyping : Explore and implement cutting-edge AI techniques, including setting up systems for offline LLM inference to ensure privacy and performance. Collaboration : Partner closely with product managers, designers, and business stakeholders to transform ideas into tangible, high-impact technical solutions. Required Skills & Experience Experience : 2-5 years of professional experience in a software engineering role. Full-Stack Proficiency : Strong command of the MERN stack (MongoDB, Express.js, React, Node.js) for building modern web applications. Python Expertise : Solid programming skills in Python, especially for backend services and AI/ML workloads. Generative AI & LLM Experience (Must-Have) Demonstrable experience integrating with foundation LLMs (e.g., OpenAI API, Llama, Mistral, etc.). Hands-on experience building complex AI systems and implementing architectures such as Retrieval-Augmented Generation (RAG) to ground models with external knowledge. Practical experience with AI application frameworks like LangChain and LangGraph to create agentic, multi-step workflows. Deep understanding of prompt engineering techniques (zero-shot, few-shot prompting). Experience or strong theoretical understanding of fine-tuning custom models for specific domains. Familiarity with concepts or practical experience in deploying LLMs for offline inference. R&D Mindset : A natural curiosity and passion for learning, experimenting with new technologies, and solving problems in novel ways. Bonus Points (Nice-to-Haves) Cloud Knowledge : Hands-on experience with AWS services (e.g., EC2, S3, Lambda, SageMaker). (ref:hirist.tech)
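As a hedged illustration of the few-shot prompting technique this posting asks for, here is a framework-free sketch of assembling labelled examples ahead of the user query so the model imitates the pattern. The examples and labels are invented for illustration.

```python
# Illustrative few-shot prompt construction for a sentiment-classification task.
FEW_SHOT_EXAMPLES = [
    ("The agent resolved my issue in minutes!", "positive"),
    ("I was on hold for an hour and got no answer.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    lines = ["Classify the sentiment of each customer message."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}\nSentiment: {label}")
    lines.append(f"Message: {query}\nSentiment:")
    return "\n\n".join(lines)

# The resulting string is sent as a single prompt to the chosen LLM
# (OpenAI API, Llama, Mistral, or an offline model).
print(build_few_shot_prompt("The chatbot answered before I finished typing."))
```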

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Location: Noida
Job Role: Sr. AI/ML/Data Scientist

Overview
We are seeking a highly skilled and experienced AI/ML Expert to spearhead the design, development, and deployment of advanced artificial intelligence and machine learning models. This role requires a strategic thinker with a deep understanding of ML algorithms, model optimization, and production-level AI systems. You will guide cross-functional teams, mentor junior data scientists, and help shape the AI roadmap to drive innovation and business impact. This role involves statistical analysis, data modeling, and interpreting large sets of data. We are looking for an AI/ML expert with experience creating AI models for data intelligence companies, who specializes in developing and deploying artificial intelligence and machine learning solutions tailored for data-driven businesses. This expert should possess a strong background in data analysis, statistics, programming, and machine learning algorithms, enabling them to design innovative AI models that can extract valuable insights and patterns from vast amounts of data.

About The Role
You will lead the design and development of a cutting-edge application powered by large language models (LLMs). This tool will provide market analysis and generate high-quality, data-driven periodic insights. You will play a critical role in building a scalable and intelligent system that integrates structured data, NLP capabilities, and domain-specific knowledge to produce analyst-grade content.

Key Responsibilities
Design and develop LLM-based systems for automated market analysis. Build data pipelines to ingest, clean, and structure data from multiple sources (e.g., market feeds, news articles, technical reports, internal datasets). Fine-tune or prompt-engineer LLMs (e.g., GPT-4.5, Llama, Mistral) to generate concise, insightful reports. Collaborate closely with domain experts to integrate industry-specific context and validation into model outputs. Implement robust evaluation metrics and monitoring systems to ensure the quality, relevance, and accuracy of generated insights. Develop and maintain APIs and/or user interfaces to enable analysts or clients to interact with the LLM system. Stay up to date with advancements in the GenAI ecosystem and recommend relevant improvements or integrations. Participate in code reviews, experimentation pipelines, and collaborative research.

Required
Strong fundamentals in machine learning, deep learning, and natural language processing (NLP). Proficiency in Python, with hands-on experience using libraries such as NumPy, Pandas, and Matplotlib/Seaborn for data analysis and visualization. Experience developing applications using LLMs (both closed- and open-source models). Familiarity with frameworks like Hugging Face Transformers, LangChain, LlamaIndex, etc. Experience building ML models (e.g., Random Forest, XGBoost, LightGBM, SVMs), along with familiarity with training and validating models. Practical understanding of deep learning frameworks: TensorFlow or PyTorch. Knowledge of prompt engineering, Retrieval-Augmented Generation (RAG), and LLM evaluation strategies. Experience working with REST APIs, data ingestion pipelines, and automation workflows. Strong analytical thinking, problem-solving skills, and the ability to convert complex technical work into business-relevant insights.

Preferred
Familiarity with the chemical or energy industry, or prior experience in market research/analyst workflows. Exposure to frameworks such as OpenAI Agentic SDK, CrewAI, AutoGen, SmolAgent, etc.
Experience deploying ML/LLM solutions to production environments (Docker, CI/CD). Hands-on experience with vector databases such as FAISS, Weaviate, Pinecone, or ChromaDB. Experience with dashboarding tools and visualization libraries (e.g., Streamlit, Plotly, Dash, or Tableau). Exposure to cloud platforms (AWS, GCP, or Azure), including usage of GPU instances and model hosting services. About ChemAnalyst ChemAnalyst is a digital platform, which keeps a real-time eye on the chemicals and petrochemicals market fluctuations, thus, enabling its customers to make wise business decisions. With over 450 chemical products traded globally, we bring detailed market information and pricing data at your fingertips. Our real-time pricing and commentary updates enable users to stay acquainted with new commercial opportunities. Each day, we flash the major happenings around the globe in our news section. Our market analysis section takes it a step further, offering an in-depth evaluation of over 15 parameters including capacity, production, supply, demand gap, company share and among others. Our team of experts analyse the factors influencing the market and forecast the market data for up to the next 10 years. We are a trusted source of information for our international clients, ensuring user-friendly and customized deliveries on time. (ref:hirist.tech)
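As a hedged sketch of the data-pipeline side of this role, here is a small pandas example that aggregates price observations into a summary table an LLM could turn into a periodic market report. The figures and product names are invented placeholders, not ChemAnalyst data.

```python
# Illustrative pandas sketch: compute week-over-week price changes per product.
import pandas as pd

prices = pd.DataFrame({
    "product": ["Methanol", "Methanol", "Toluene", "Toluene"],
    "week": ["2024-W01", "2024-W02", "2024-W01", "2024-W02"],
    "price_usd_per_mt": [310.0, 324.0, 880.0, 861.0],
})

summary = (
    prices.pivot(index="product", columns="week", values="price_usd_per_mt")
    .assign(pct_change=lambda df: (df["2024-W02"] / df["2024-W01"] - 1) * 100)
    .round(1)
)
print(summary)
# The summary table (or its text rendering) becomes context for the report-writing
# LLM prompt, e.g. "Write two sentences per product describing the weekly move."
```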

Posted 1 week ago

Apply

0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

We are looking for a backend developer with hands-on experience integrating open-source LLMs (e.g., Mistral, LLaMA) and vector databases (like FAISS, Qdrant, Weaviate, Milvus) in offline or edge environments. You will help develop core APIs and infrastructure to support embedding pipelines, retrieval-augmented generation (RAG), and LLM inference on local or mobile hardware. Selected Intern's Day-to-day Responsibilities Include Design and implement backend services that interface with local LLMs and vector databases Develop APIs to support prompt engineering, retrieval, and inference Integrate embedding models (e.g., BGE, MiniLM) and manage chunking/processing pipelines Optimize performance for offline or constrained environments (mobile, embedded devices, etc.) Package and deploy models via frameworks like Ollama, llama.cpp, gguf, or on-device runtimes Handle local file I/O, document ingestion, metadata tagging, and indexing About Company: Infoware is a process-driven software solutions provider specializing in bespoke software solutions. We work with several enterprises and startups and provide them with end-to-end solutions. You may visit the company website at https://www.infowareindia.com/
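To illustrate the offline/edge inference described above, here is a hedged sketch of calling a locally running Ollama server over its documented /api/generate endpoint. It assumes an Ollama instance is running with a Mistral model already pulled (ollama pull mistral); the prompt and timeout are placeholders.

```python
# Illustrative sketch: local LLM inference through Ollama's REST API.
import requests

def generate_locally(prompt: str, model: str = "mistral") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # In the full pipeline the prompt would include chunks retrieved from the local
    # vector index (FAISS, Qdrant, etc.) for retrieval-augmented generation.
    print(generate_locally("List three uses of on-device language models."))
```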

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We are looking for a backend developer with hands-on experience integrating open-source LLMs (e.g., Mistral, LLaMA) and vector databases (like FAISS, Qdrant, Weaviate, Milvus) in offline or edge environments. You will help develop core APIs and infrastructure to support embedding pipelines, retrieval-augmented generation (RAG), and LLM inference on local or mobile hardware. Selected Intern's Day-to-day Responsibilities Include Design and implement backend services that interface with local LLMs and vector databases Develop APIs to support prompt engineering, retrieval, and inference Integrate embedding models (e.g., BGE, MiniLM) and manage chunking/processing pipelines Optimize performance for offline or constrained environments (mobile, embedded devices, etc.) Package and deploy models via frameworks like Ollama, llama.cpp, gguf, or on-device runtimes Handle local file I/O, document ingestion, metadata tagging, and indexing About Company: Infoware is a process-driven software solutions provider specializing in bespoke software solutions. We work with several enterprises and startups and provide them with end-to-end solutions. You may visit the company website at https://www.infowareindia.com/

Posted 1 week ago

Apply

7.5 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Microsoft Azure OpenAI Service
Good to have skills: NA
Minimum 7.5 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using Azure AI Services, Microsoft Azure OpenAI Service, and Azure PaaS components. Your typical day will involve collaborating with cross-functional teams, managing project timelines, and ensuring the successful delivery of high-quality applications.

Roles & Responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), Azure Language Services (CLU), Microsoft Bot Framework / Microsoft Power Virtual Agents (Microsoft Copilot Studio), and Azure PaaS.
- Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure project timelines are met.
- Ensure the successful delivery of high-quality applications, adhering to best practices and industry standards.
- Provide technical guidance and mentorship to team members, fostering a culture of continuous learning and growth.
- Stay updated with the latest advancements in Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), and Azure Language Services (CLU), integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-Have Skills: Strong experience with Microsoft Azure OpenAI Service, Azure AI Search (Cognitive Search), Azure Language Services (CLU), and Azure PaaS, including API Management, Key Vault, and Application Insights.
- Must-Have Skills: At least one year of hands-on experience with Azure-based LLMs, with knowledge of embeddings/vectorization and semantic/vector-based indexing and search. Strong experience with LLM frameworks such as LangChain and Semantic Kernel.
- Must-Have Skills: Experience with conversational AI using Microsoft Bot Framework or Power Virtual Agents (Microsoft Copilot Studio). Should have experience integrating chatbot solutions with multiple channels.
- Must-Have Skills: Strong understanding of software development best practices, including Agile methodologies and industry standards, with a special focus on API, function app, and logic app building. Experience with the Python programming language, using the Visual Studio Code IDE and Azure AI Studio.
- Good-to-Have Skills: Knowledge of other vector databases, Knowledge Graphs, and other programming languages such as C#. Experience with different LLMs such as Llama, Mistral, etc.
- Good-to-Have Skills: Experience with application deployment and containerization technologies such as Docker or Kubernetes.
- Good-to-Have Skills: Experience with other Azure AI services such as Video Indexer, Computer Vision, Custom Vision, and Document Intelligence.
- Experience with database technologies such as SQL Server and Cosmos DB.
- Solid grasp of software testing and quality assurance principles.
- Knowledge of DevOps practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in software development.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Bengaluru office.
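As a hedged sketch of the core Azure OpenAI integration this role builds on, here is a minimal chat-completion call using the openai Python SDK's AzureOpenAI client. The endpoint, API version, and deployment name are placeholders to be replaced with the target Azure resource's values.

```python
# Illustrative Azure OpenAI chat-completion call (placeholders throughout).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # the Azure deployment name, not the model family
    messages=[
        {"role": "system", "content": "You answer questions about indexed documents."},
        {"role": "user", "content": "Summarise the latest architecture review."},
    ],
)
print(response.choices[0].message.content)
# In the RAG patterns this posting describes, Azure AI Search results would be
# inserted into the messages as grounding context before the call.
```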

Posted 1 week ago

Apply

10.0 - 20.0 years

20 - 35 Lacs

Bengaluru

Work from Office

We are looking for a Technical Architect who has experience designing software solutions from the ground up, making high-level decisions about each stage of the process, and leading a team of engineers to create the final product. To be successful as a Technical Architect, you should be an expert problem solver with a strong understanding of the broad range of software technologies and platforms available. Top candidates will also be excellent leaders and communicators.

Responsibilities:
- Collaborate with stakeholders (Product Owners, Engineers, Clients) to define and refine software architecture aligned with the product vision.
- Design scalable, modular, and secure system architecture using microservices and modern design patterns.
- Create high-level product specifications, architecture diagrams, and technical documentation.
- Provide architectural blueprints, guidance, and technical leadership to development teams.
- Build and integrate AI-driven components, including Agentic AI (AutoGen, LangGraph), RAG pipelines, and LLM capabilities (open-source and commercial models).
- Define vector-based data pipelines and integrate vector databases such as FAISS, ChromaDB, or Azure AI Search (a minimal FAISS sketch follows this posting).
- Lead the implementation of prompt engineering strategies to optimize LLM outcomes.
- Architect and deploy scalable, secure solutions on Azure Cloud, using services like Azure Kubernetes Service (AKS), Azure Functions, Azure Storage, and Azure DevOps.
- Ensure robust CI/CD and DevSecOps practices are in place to streamline deployments and enforce compliance.
- Guide the team in troubleshooting, code reviews, performance tuning, and adherence to software engineering best practices.
- Conduct regular technical reviews, present progress updates, and ensure timely delivery of milestones.
- Drive innovation and exploration of emerging technologies, particularly in Generative AI and Intelligent Automation.

Requirements:
- 10+ years of hands-on software development experience, with a strong Python background.
- 3+ years of experience in a Software or Solution Architect role.
- Proven experience in designing and building production-grade solutions using microservices architecture.
- Strong expertise in Agentic AI frameworks (AutoGen, LangGraph, CrewAI, LangChain, etc.).
- Working knowledge of LLMs (open-source such as LLaMA and Mistral; commercial such as OpenAI and Azure OpenAI).
- Solid experience with prompt engineering, RAG pipelines, and vector database integration.
- Experience deploying AI/ML solutions on Azure Cloud, leveraging PaaS components and cloud-native patterns.
- Proficiency with containerization and orchestration tools (Docker, Kubernetes).
- Knowledge of architectural patterns: event-driven, domain-driven design, CQRS, service mesh, etc.
- Familiarity with NLP tools, frameworks, and libraries (spaCy, Hugging Face Transformers, etc.).
- Experience with Apache Airflow or other workflow orchestration tools.
- Strong communication and leadership skills to work cross-functionally with technical and non-technical teams.
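
The FAISS integration called out above can be sketched in a few lines. The 384-dimensional random vectors below are placeholders standing in for real document embeddings, so treat this as a shape-of-the-solution illustration rather than a production pipeline.

```python
# Minimal sketch: build a FAISS index over document embeddings and query it.
# The random 384-dim vectors are placeholders for real embeddings.
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype="float32")   # pretend document embeddings

index = faiss.IndexFlatL2(dim)     # exact L2 search; swap for IVF/HNSW variants at scale
index.add(doc_vectors)

query = rng.random((1, dim), dtype="float32")            # pretend query embedding
distances, ids = index.search(query, k=5)
print(ids[0], distances[0])        # indices and distances of the 5 nearest documents
```

The same retrieval step slots into a RAG pipeline as the "retrieve" stage, with the returned documents concatenated into the LLM prompt.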

Posted 1 week ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Job Description: The AI/ML Engineer role requires a blend of expertise in machine learning operations (MLOps), ML engineering, data science, Large Language Models (LLMs), and software engineering principles.

Skills you'll need to bring:
- Experience building production-quality ML and AI systems.
- Experience in MLOps and real-time ML and LLM model deployment and evaluation.
- Experience with RAG frameworks and agentic workflows is valuable.
- Proven experience deploying and monitoring large language models (e.g., Llama, Mistral, etc.).
- Ability to improve evaluation accuracy and relevancy using creative, cutting-edge techniques from both industry and new research.
- Solid understanding of real-time data processing and monitoring tools for model drift and data validation.
- Knowledge of observability best practices specific to LLM outputs, including semantic similarity, compliance, and output quality (a minimal semantic-similarity check is sketched after this posting).
- Strong programming skills in Python and familiarity with API-based model serving.
- Experience with LLM management and optimization platforms (e.g., LangChain, Hugging Face).
- Familiarity with data engineering pipelines for real-time input-output logging and analysis.

Qualifications:
- Experience working with common AI-related models, frameworks, and toolsets such as LLMs, vector databases, NLP, prompt engineering, and agent architectures.
- Experience in building AI and ML solutions.
- Strong software engineering skills for the rapid and accurate development of AI models and systems.
- Proficiency in a programming language such as Python.
- Hands-on experience with technologies like Databricks and Delta Tables.
- Broad understanding of data engineering (SQL, NoSQL, Big Data), Agile, UX, Cloud, software architecture, and ModelOps/MLOps.
- Experience in CI/CD and testing, with experience building container-based stand-alone applications using tools like GitHub, Jenkins, Docker, and Kubernetes.

Responsibilities:
- Participate in research and innovation of data science projects that have impact on our products and customers globally.
- Apply ML expertise to train models, validate their accuracy, and deploy them at scale to production.
- Apply best practices in MLOps, LLMOps, data science, and software engineering to ensure the delivery of clean, efficient, and reliable code.
- Aggregate huge amounts of data from disparate sources to discover patterns and features necessary to automate analytical models.

About Company: Improva is a global IT solution provider and outsourcing company with contributions across several domains including FinTech, Healthcare, Insurance, Airline, Ecommerce & Retail, Logistics, Education, Startups, Government & Semi-Government, and more.
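
One concrete way to implement the semantic-similarity observability check mentioned above is to embed the generated answer and a reference answer and compare them with cosine similarity. The sketch below uses sentence-transformers; the model name, texts, and threshold are illustrative assumptions.

```python
# Minimal sketch: score an LLM answer against a reference with embedding cosine
# similarity (sentence-transformers). Model, texts, and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Refunds are processed within 5 business days of approval."
generated = "Once approved, a refund usually takes about five working days."

emb = model.encode([reference, generated], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

# Low-similarity outputs get flagged for human review; 0.7 is a tunable assumption.
print(f"semantic similarity = {score:.3f}", "OK" if score > 0.7 else "REVIEW")
```

In a monitoring pipeline, such scores would be logged per request and tracked over time to surface drift in output quality.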

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
- Advanced Model Development: Design and implement cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to address specific business challenges.
- AI Agent and Chatbot Development: Create conversational AI agents capable of delivering seamless, human-like interactions, from foundational models to fine-tuning chatbots tailored to client needs.
- Retrieval-Augmented Generation (RAG): Develop and optimize RAG models, enhancing AI's ability to retrieve and synthesize relevant information for accurate responses.
- Framework Expertise: Leverage LlamaIndex and LangChain frameworks for building agent-driven applications that interact with large language models (LLMs).
- Data Infrastructure: Manage and utilize data lakes, data warehouses (including Snowflake), and Databricks for large-scale data storage and processing.
- Machine Learning Operations (MLOps): Manage the full lifecycle of machine learning projects, from data preprocessing and feature engineering through model training, evaluation, and deployment, with a solid understanding of MLOps practices.
- Data Analysis & Insights: Conduct advanced data analysis to uncover actionable insights and support data-driven strategies across the organization.
- Cross-Functional Collaboration: Partner with cross-departmental stakeholders to align AI initiatives with business needs, developing scalable AI-driven solutions.
- Mentorship & Leadership: Guide junior data scientists and engineers, fostering innovation, skill growth, and continuous learning within the team.
- Research & Innovation: Stay at the forefront of AI and deep learning advancements, experimenting with new techniques to improve model performance and enhance business value.
- Reporting & Visualization: Develop and present reports, dashboards, and visualizations to effectively communicate findings to both technical and non-technical audiences.
- Cloud-Based AI Deployment: Utilize AWS Bedrock, including models such as Mistral and Anthropic Claude, to deploy and manage AI models at scale, ensuring optimal performance and reliability (a minimal Bedrock call is sketched after this posting).
- Web Framework Integration: Build and deploy AI-powered applications using web frameworks such as Django and Flask, enabling seamless API integration and scalable backend services.

Technical Skills:
- Deep Learning & Machine Learning: Extensive hands-on experience with PyTorch, TensorFlow, and scikit-learn, along with large-scale data processing.
- Programming & Data Engineering: Strong programming skills in Python or R, with knowledge of big data technologies such as Hadoop, Spark, and advanced SQL.
- Data Infrastructure: Proficiency in managing and utilising data lakes, data warehouses, and Databricks for large-scale data processing and storage.
- MLOps & Data Handling: Familiar with MLOps and experienced in data handling tools like pandas and dask for efficient data manipulation.
- Cloud Computing: Advanced understanding of cloud platforms, especially AWS, for scalable AI/ML model deployment.
- AWS Bedrock: Expertise in deploying models on AWS Bedrock, with models such as Mistral and Anthropic Claude.
- AI Frameworks: Skilled in LlamaIndex and LangChain, with practical experience in agent-based applications.
- Data Visualization: Proficient in visualization tools like Tableau and Power BI for clear data presentation.
- Analytical & Communication Skills: Strong problem-solving abilities with the capability to convey complex technical concepts to diverse audiences.
- Team Collaboration & Leadership: Proven success in collaborative team environments, with experience in mentorship and leading innovative data science projects.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field.
- Experience: Specialization in deep learning, including extensive experience with PyTorch and TensorFlow.
- Advanced AI Knowledge: Familiarity with reinforcement learning, NLP, and generative models.

Benefits:
- Friendly Work Environment
- Work-Life Balance
- Company-Sponsored Medical Insurance
- 5-Day Work Week with Flexible Timings
- Frequent Team Outings
- Yearly Leave Encashment

Location: Ahmedabad
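
The Bedrock deployment work described above usually starts with a simple runtime call; the sketch below invokes a Claude model through boto3's bedrock-runtime client. The model ID, region, and request schema (Anthropic's Bedrock messages format) are assumptions to verify against current AWS documentation.

```python
# Minimal sketch: invoke an Anthropic Claude model on AWS Bedrock via boto3.
# Model ID, region, and payload schema are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our Q3 churn drivers in two bullet points."}],
}

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # assumed model ID; check your region's catalog
    body=json.dumps(body),
)
payload = json.loads(resp["body"].read())
print(payload["content"][0]["text"])
```

A Mistral model on Bedrock follows the same invoke_model pattern but expects that model family's request schema instead.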

Posted 1 week ago

Apply

7.0 - 12.0 years

8 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Key Responsibilities:
- Lead a cross-functional engineering team building AI-powered call center solutions.
- Oversee the deployment and tuning of STT, TTS, and LLM models.
- Translate expert agent feedback into model tuning strategies.
- Manage prompt engineering, RAG/CAG integration, and action workflows (a minimal grounded-prompt sketch follows this posting).
- Customize models rapidly for new enterprise clients.
- Ensure the team can:
  - Understand speech-to-text and text-to-speech models
  - Analyze feedback from expert call agents
  - Develop and refine prompts for LLM tuning
  - Create reliable, scalable workhorse models
  - Optimize model operations to reduce cost
  - Integrate LLMs with Retrieval-Augmented Generation (RAG) and Context-Augmented Generation (CAG) systems
  - Build custom action workflows linked with enterprise data systems
  - Enable rapid customization for customer-specific use cases and data environments

Requirements:
- Strong background in NLP and LLMs (GPT, Mistral, Claude, etc.).
- Experience in leading AI/ML teams or mentoring.
- Exposure to distributed teams across time zones.
- Familiarity with cloud platforms (AWS/GCP), vector DBs, LangChain, etc.
- Deep understanding of NLP, speech technologies, and generative AI stacks.
- Strong systems architecture and cloud deployment skills.
- Experience leading technical teams, ideally in a fast-paced environment.
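
Grounding an agent-assist response on retrieved context is largely a prompt-construction exercise. The sketch below is a plain-Python illustration where the retrieval function, policy snippets, and template wording are all hypothetical placeholders.

```python
# Minimal sketch: assemble a RAG-grounded prompt for an agent-assist LLM call.
# retrieve_context(), the policy snippets, and the template are illustrative.
def retrieve_context(query: str) -> list[str]:
    # Placeholder for a vector-store lookup (FAISS, a managed vector DB, etc.).
    return [
        "Refunds over $500 need supervisor approval.",
        "Always confirm the caller's account ID before discussing billing.",
    ]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {c}" for c in retrieve_context(query))
    return (
        "You are a call-center assistant. Answer using ONLY the context below.\n"
        "If the context is insufficient, say so and suggest escalation.\n\n"
        f"Context:\n{context}\n\n"
        f"Agent question: {query}\n"
    )

print(build_prompt("Customer wants a $700 refund, what should I do?"))
```

Context-augmented setups differ mainly in how the context is sourced and cached; the prompt-assembly step looks much the same.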

Posted 1 week ago

Apply

2.0 years

12 - 28 Lacs

Coimbatore, Tamil Nadu, India

On-site

Experience: 3 to 10 years
Location: Coimbatore
Notice Period: Immediate joiners are preferred.
Note: Minimum 2 years of experience in core Gen AI is required.

Key Responsibilities:
- Design, develop, and fine-tune Large Language Models (LLMs) for various in-house applications (a minimal LoRA fine-tuning sketch follows this posting).
- Implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality.
- Develop and deploy Agentic AI systems capable of autonomous decision-making and task execution.
- Build and manage data pipelines for processing, transforming, and feeding structured/unstructured data into AI models.
- Ensure scalability, performance, and security of AI-driven solutions in production environments.
- Collaborate with cross-functional teams, including data engineers, software developers, and product managers.
- Conduct experiments and evaluations to improve AI system accuracy and efficiency.
- Stay updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

Required Skills & Qualifications:
- Strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT.
- Hands-on experience with RAG architectures, including vector databases (e.g., Pinecone, ChromaDB, Weaviate, OpenSearch, FAISS).
- Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks.
- Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow.
- Experience in Python web frameworks such as FastAPI, Django, or Flask.
- Experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark.
- Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes).
- Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications.
- Strong understanding of vector search, embedding models, and hybrid retrieval techniques.
- Experience with optimizing inference and serving AI models in real-time production systems.

Nice-to-Have Skills:
- Experience with multi-modal AI (text, image, audio).
- Familiarity with privacy-preserving AI techniques and responsible AI frameworks.
- Understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Skills: PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, Gen AI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, Kafka
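
The LoRA/PEFT fine-tuning work listed above typically begins by attaching a small adapter to a frozen base model. The sketch below uses Hugging Face PEFT, with the checkpoint name, target modules, and hyperparameters as illustrative assumptions rather than a prescribed recipe.

```python
# Minimal sketch: attach a LoRA adapter to a causal LM with Hugging Face PEFT.
# Checkpoint, target modules, and hyperparameters are illustrative assumptions;
# a real run would add a dataset, a Trainer/TRL training loop, and evaluation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"            # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # attention projections in Mistral/LLaMA-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()            # only the adapter weights remain trainable
```

Because only the adapter parameters are updated, the same pattern extends to QLoRA by loading the base model in 4-bit via bitsandbytes before wrapping it.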

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site

About the Job:
As a Lead AI/ML Engineer, you spearhead the design, development, and implementation of advanced AI and machine learning models. Your role involves guiding a team of engineers and ensuring the successful deployment of projects that leverage AI/ML technologies to solve complex problems. You collaborate closely with stakeholders to understand business needs, translate them into technical requirements, and drive innovation. Your responsibilities include optimizing model performance, conducting rigorous testing, and maintaining up-to-date knowledge of the latest industry trends. Additionally, you mentor team members, promote best practices, and contribute to strategic decision-making within the organization.

Core Responsibilities:
- Client Interaction: Discuss client requirements and develop proposals tailored to their needs.
- Demonstrations and Workshops: Conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates in line with customer budgets and organizational financial guidelines.
- Model Oversight: Oversee the development and deployment of AI models, especially those generating content such as text, images, or other media.
- AI Solutions: Engage in coding, designing, developing, implementing, and deploying advanced AI solutions.
- Expertise Utilization: Utilize your expertise in NLP, Python programming, LLMs, Deep Learning, and AI principles to drive the development of transformative technologies.
- Leadership and Initiative: Actively lead projects and contribute to both unit-level and organizational initiatives to provide high-quality, value-adding solutions to customers.
- Strategic Development: Develop value-creating strategies and models to help clients innovate, drive growth, and increase profitability.
- Technology Awareness: Stay informed about the latest technologies and industry trends.
- Problem-Solving and Collaboration: Employ logical thinking and problem-solving skills, and collaborate effectively.
- Client Interfacing: Demonstrate strong client interfacing skills.
- Project and Team Management: Manage projects and teams efficiently.

Required Skills:
- Skills: Hands-on expertise in NLP, Computer Vision, programming, and related concepts.
- Leadership: Capable of leading and mentoring a team of AI engineers and researchers, setting strategies for AI model development and deployment, and ensuring these align with business goals.
- Technical Proficiency: Proficient in implementing and optimizing advanced AI solutions using Deep Learning and NLP, with tools such as TensorFlow, PyTorch, Spark, and Keras.
- LLM Experience: Experience with Large Language Models like GPT-3.5, GPT-4, Llama, Gemini, Mistral, etc., along with experience in LLM integration frameworks like LangChain, LlamaIndex, AgentGPT, etc.
- Deep Learning OCR: Extensive experience implementing solutions using Deep Learning OCR algorithms.
- Neural Networks: Working knowledge of Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs).
- Python Expertise: Strong coding skills in Python, including related frameworks, best practices, and design patterns.
- Preferred Knowledge: Familiarity with word embeddings, transformer models, and image/text generation and processing.
- Deployment: Experience deploying AI/ML solutions as a service or as REST API endpoints on Cloud or Kubernetes (a minimal serving sketch follows this posting).
- Development Methodologies: Proficient in development methodologies and writing unit tests in Python.
- Cloud: Knowledge of cloud computing platforms and services, such as AWS, Azure, or Google Cloud. Experience with information security and secure development best practices.

Qualifications:
- Bachelor's or higher degree in Computer Science, Engineering, Mathematics, Statistics, Physics, or a related field.
- 8+ years in IT with a focus on AI/ML practices and background.
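
The REST-endpoint deployment requirement above often reduces to wrapping a model behind a small web service. Here is a hedged FastAPI sketch using a stock Hugging Face sentiment pipeline as a stand-in for whatever model is actually deployed; the route name and payload shape are illustrative.

```python
# Minimal sketch: serve a model as a REST endpoint with FastAPI.
# The sentiment pipeline is a stand-in; route and payload shape are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")   # downloads a small default HF model

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    result = classifier(req.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally with:  uvicorn main:app --reload   (assuming this file is main.py)
```

The same service containerizes naturally with Docker and scales behind Kubernetes, which is where the Cloud/Kubernetes part of the requirement typically comes in.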

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: AI/ML Engineer
Location: Hyderabad (On-site)
Experience: 5+ years

Role Overview:
We are hiring a Mid-Level AI/ML Engineer with 5+ years of experience in designing, developing, and deploying AI/ML solutions. The role involves integrating LLMs, Agentic AI, RAG pipelines, and anomaly detection into cloud/on-prem platforms, enabling natural language interfaces and intelligent automation. Candidates must be hands-on with the Python ML stack, own the MLOps lifecycle, and be capable of translating cybersecurity problems into scalable ML solutions.

Key Responsibilities:
- LLM & Chatbot Integration: Build conversational AI using LLMs with context-awareness, domain adaptation, and natural language interaction.
- Retrieval-Augmented Generation (RAG): Implement vector search with semantic retrieval to ground LLM responses with internal data.
- Agentic AI: Create autonomous agents to execute multi-step actions using APIs, tools, or reasoning chains for automated workflows.
- Anomaly Detection & UEBA: Develop ML models for user behaviour analytics, threat detection, and alert tuning (a minimal anomaly-detection sketch follows this posting).
- NLP & Insights Generation: Transform user queries into actionable security insights, reports, and policy recommendations.
- MLOps Ownership: Manage the end-to-end model lifecycle: training, validation, deployment, monitoring, and versioning.

Required Skills:
- Strong Python experience with ML frameworks: TensorFlow, PyTorch, scikit-learn.
- Hands-on with LLMs (OpenAI, Hugging Face, etc.), prompt engineering, fine-tuning, and inference optimization.
- Experience implementing RAG using FAISS, Pinecone, or similar.
- Familiarity with LangChain, agentic frameworks, and multi-agent orchestration.
- Solid understanding of MLOps: Docker, CI/CD, deployment on cloud/on-prem infrastructure.
- Security-conscious development practices and ability to work with structured/unstructured security data.

Preferred:
- Bachelor's degree in Computer Science, preferably with a focus on Data Science or AI-related fields.
- Experience with cybersecurity use cases: CVE analysis, behaviour analytics, compliance, log processing.
- Knowledge of open-source LLMs (LLaMA, Mistral, etc.) and cost-efficient deployment methods.
- Background in chatbots, Rasa, or custom NLP-driven assistants.
- Exposure to agent tools (LangChain Agents, AutoGPT-style flows) and plugin integration.
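
For the UEBA/anomaly-detection responsibility, an unsupervised baseline is often an isolation forest over per-user behaviour features. The sketch below uses scikit-learn with synthetic features (login count, data transferred, off-hours activity ratio) that are purely illustrative.

```python
# Minimal sketch: unsupervised anomaly detection over user-behaviour features
# with scikit-learn's IsolationForest. The synthetic features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: daily logins, MB transferred, fraction of activity outside work hours.
normal = rng.normal(loc=[20, 500, 0.1], scale=[5, 100, 0.05], size=(500, 3))
suspicious = np.array([[90, 5000, 0.9], [3, 8000, 0.95]])   # injected outliers
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)              # -1 = anomaly, 1 = normal
print(np.where(flags == -1)[0])       # row indices flagged for analyst review
```

In practice the flagged rows would feed alert-tuning and analyst-review workflows rather than being acted on automatically.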

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We create intelligent, user-first digital products that redefine industries through the power of AI and engineering excellence.

We are looking for a Senior Software Engineer - AI with 4-6 years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment: fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies.

As a Senior Software Engineer - AI at Techvantage.ai, you will be responsible for building and deploying AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs). You will design and implement Agentic AI systems: autonomous agents capable of planning and executing multi-step tasks. You will collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products; write clean, scalable code and build robust APIs and services to support AI model deployment; own feature delivery end to end, from research and experimentation to deployment and monitoring; stay current with emerging AI frameworks, tools, and best practices and apply them in product development; and contribute to a high-performing team culture, mentoring junior team members as needed.

To excel in this role, we are seeking candidates with 3-6 years of overall software development experience, with at least 3 years specifically in AI/ML engineering. Strong proficiency in Python is required, along with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face). Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio) is highly desirable. Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel) is a plus. Experience building and deploying ML models to production environments is essential. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts is beneficial. Comfort working in a startup-like environment is crucial: self-motivated, adaptable, and willing to take ownership. A solid understanding of API development, version control, and modern DevOps/MLOps practices is expected from the ideal candidate.

Posted 1 week ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary

Job Description: Generative AI Engineer

LLM Applications & Agentic Frameworks:
- Design and implement end-to-end LLM applications using OpenAI, Claude, Mistral, Gemini, or LLaMA on AWS, Databricks, Azure, or GCP.
- Build intelligent, autonomous agents using LangGraph, AutoGen, LlamaIndex, Crew.ai, or custom frameworks.
- Develop multi-model, multi-agent Retrieval-Augmented Generation (RAG) applications with secure context embedding and tracing with reports.
- Rapidly explore and showcase the art of the possible through functional, demonstrable POCs.

Advanced AI Experimentation:
- Fine-tune LLMs and Small Language Models (SLMs) for domain-specific use.
- Create and leverage synthetic datasets to simulate edge cases and scale training.
- Evaluate agents using custom agent evaluation frameworks (success rates, latency, reliability); a minimal evaluation-harness sketch follows this posting.
- Evaluate emerging agent communication standards such as A2A (Agent-to-Agent) and MCP (Model Context Protocol).

Business Alignment & Cross-Team Collaboration:
- Translate ambiguous requirements into structured, AI-enabled solutions.
- Clearly communicate and present ideas, outcomes, and system behaviors to technical and non-technical stakeholders.

Good to Have:
- Microsoft Copilot Studio
- DevRev
- Codium
- Cursor
- Atlassian AI
- Databricks Mosaic AI

Qualifications:
- 5-9 years of experience in software development or AI/ML engineering.
- At least 3 years working with LLMs, GenAI applications, or agentic frameworks.
- Proficient in AI/ML and MLOps concepts, Python, embeddings, prompt engineering, and model orchestration.
- Proven track record of developing functional AI prototypes beyond notebooks.
- Strong presentation and storytelling skills to clearly convey GenAI concepts and value.
- Ability to independently drive AI experiments from ideation to working demo.

Work Location: Bangalore (daily office attendance is mandatory)
Experience: 5 to 9 years
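
A custom agent evaluation framework of the kind mentioned above can start as a tiny harness that replays test tasks and records success rate and latency. In the sketch below, run_agent, the test cases, and the containment-based scoring rule are hypothetical stand-ins for a real agent under test.

```python
# Minimal sketch: evaluate an agent on success rate and latency over test tasks.
# run_agent(), the cases, and the scoring rule are illustrative placeholders.
import time
import statistics

def run_agent(task: str) -> str:
    # Placeholder: call the real LangGraph/AutoGen agent here and return its answer.
    return "Paris" if "France" in task else "unknown"

test_cases = [
    ("Name the capital of France.", "Paris"),
    ("What is the largest planet in the solar system?", "Jupiter"),
]

latencies, successes = [], 0
for prompt, expected in test_cases:
    start = time.perf_counter()
    output = run_agent(prompt)
    latencies.append(time.perf_counter() - start)
    successes += int(expected.lower() in output.lower())

print(f"success rate:   {successes / len(test_cases):.0%}")
print(f"median latency: {statistics.median(latencies) * 1000:.2f} ms")
```

Reliability is usually measured by repeating each task several times and tracking the variance of outcomes, which this harness can be extended to do.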

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary:
We are looking for a hands-on Data Scientist with deep expertise in NLP and Generative AI to help build and refine the intelligence behind our agentic AI systems. You will be responsible for fine-tuning, prompt engineering, and evaluating LLMs that power our digital workers across real-world workflows.

Years of Experience: 3 - 6 years
Budget: 18 LPA to 24 LPA
Location: Chennai
Notice Period: Immediate to 30 days

Key Responsibilities:
- Fine-tune and evaluate LLMs (e.g., Mistral, LLaMA, Qwen) using frameworks like Unsloth, HuggingFace, and DeepSpeed
- Develop high-quality prompts and RAG pipelines for few-shot and zero-shot performance
- Analyze and curate domain-specific text datasets for training and evaluation
- Conduct performance and safety evaluation of fine-tuned models
- Collaborate with engineering teams to integrate models into agentic workflows
- Stay up to date with the latest in open-source LLMs and GenAI tools, and rapidly prototype experiments
- Apply efficient training and inference techniques (LoRA, QLoRA, quantization, etc.)

Requirements:
- 3+ years of experience in Natural Language Processing (NLP) and machine learning applied to text
- Strong coding skills in Python
- Hands-on experience fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, Qwen) using frameworks like Unsloth, HuggingFace Transformers, PEFT, LoRA, QLoRA, bitsandbytes
- Proficient in PyTorch (preferred) or TensorFlow, with experience in writing custom training/evaluation loops
- Experience in dataset preparation, tokenization (e.g., Tokenizer, tokenizers), and formatting for instruction tuning (ChatML, Alpaca, ShareGPT formats); a minimal chat-template formatting sketch follows this posting
- Familiarity with retrieval-augmented generation (RAG) using FAISS, Chroma, Weaviate, or Qdrant
- Strong knowledge of prompt engineering, few-shot/zero-shot learning, chain-of-thought prompting, and function-calling patterns
- Exposure to agentic AI frameworks like CrewAI, Phidata, LangChain, LlamaIndex, or AutoGen
- Experience with GPU-accelerated training/inference and libraries like DeepSpeed, Accelerate, Flash Attention, Transformers v2, etc.
- Solid understanding of LLM evaluation metrics (BLEU, ROUGE, perplexity, pass@k) and safety-related metrics (toxicity, bias)
- Ability to work with open-source checkpoints and formats (e.g., safetensors, GGUF, HF Hub, GPTQ, ExLlama)
- Comfortable with containerized environments (Docker) and scripting for training pipelines, data curation, or evaluation workflows

Nice to Haves:
- Experience in Linux (Ubuntu)
- Terminal/Bash scripting
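
Formatting examples for instruction tuning (the ChatML/Alpaca/ShareGPT requirement above) is commonly done through a tokenizer's chat template. The sketch below uses Hugging Face's apply_chat_template with an assumed Mistral instruct checkpoint and illustrative messages.

```python
# Minimal sketch: render an instruction-tuning example with a chat template.
# The checkpoint and the messages are illustrative; a real pipeline would map
# this over a curated dataset before tokenization.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")  # assumed checkpoint

messages = [
    {"role": "user", "content": "Extract the invoice number from: 'INV-2024-0031, due in 30 days'."},
    {"role": "assistant", "content": "INV-2024-0031"},
]

# String form of the model's expected prompt format (set tokenize=True for token IDs).
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```

Alpaca- or ShareGPT-formatted corpora are typically converted into this role/content message structure first, so the same template call covers all three formats.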

Posted 1 week ago

Apply