Home
Jobs

285 Mistral Jobs - Page 8

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Assurance – Senior – Digital

Position Details: As part of EY GDS Assurance Digital, you will be responsible for implementing innovative ideas through AI research to develop high-growth, impactful products. You will help EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. You will work with multi-disciplinary teams across the entire region to support global clients. This is a core full-time AI developer role, responsible for creating innovative solutions by applying AI-based techniques to business problems. As our in-house senior AI engineer, your expertise and skills will be vital to our ability to steer our Innovation agenda.

Responsibilities: Convert business problems into analytical problems and devise a solution approach. Clean, aggregate, analyze and interpret data to derive business insights. Own the AI/ML implementation process: model design, feature planning, testing, production setup, monitoring, and release management. Work closely with Solution Architects on deploying AI POCs and scaling them up to production-level applications. Bring a solid background in Python and experience deploying open-source models. Work on data extraction techniques from complex PDFs, Word documents and forms: entity extraction, table extraction, and information comparison.

Key Requirements/Skills & Qualifications: Excellent academic background, including at minimum a bachelor’s or master’s degree in Data Science, Business Analytics, Statistics, Engineering, Operational Research, or another related field with a strong focus on modern data architectures, processes, and environments. Solid background in Python with excellent coding skills. 4+ years of core data science experience in one or more of the areas below: Machine Learning (regression, classification, decision trees, random forests, time-series forecasting and clustering); understanding and usage of Large Language Models such as OpenAI models (ChatGPT, GPT-4) and frameworks such as LangChain and LlamaIndex; good understanding of open-source LLMs such as Mistral and Llama, and fine-tuning on custom datasets; Deep Learning (DNN, RNN, LSTM, encoder-decoder models); Natural Language Processing (text summarization, aspect mining, question answering, text classification, NER, language translation, NLG, sentiment analysis, sentence similarity); Computer Vision (image classification, object detection, tracking, etc.); SQL/NoSQL databases and their manipulation components; working knowledge of API deployment (Flask/FastAPI/Azure Function Apps), web app creation, Docker, and Kubernetes.

Additional skills requirements: Excellent written, oral, presentation and facilitation skills. Ability to coordinate multiple projects and initiatives simultaneously through effective prioritization, organization, flexibility, and self-discipline. Demonstrated project management experience. Knowledge of the firm’s reporting tools and processes. Proactive, organized, and self-sufficient, with the ability to prioritize and multitask.
Analyzes complex or unusual problems and can deliver insightful and pragmatic solutions. Ability to quickly and easily create, gather and analyze data from a variety of sources. A robust and resilient disposition, able to encourage discipline in team behaviors.

What We Look For: A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
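The “Machine Learning” requirement above covers classic supervised modelling (classification, random forests, and similar). A minimal illustrative baseline of that kind of workflow, assuming scikit-learn and a synthetic dataset rather than anything from the posting, looks like this:

```python
# Illustrative only: a small classification baseline of the kind the listing's
# "Machine Learning" requirement refers to. The dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```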

Posted 3 weeks ago

Apply

2.0 - 6.0 years

6 - 16 Lacs

Chennai

Work from Office

Naukri logo

We are seeking a highly skilled and motivated AI/ML Engineer with 2-5 years of experience in the software industry. The ideal candidate will have a strong foundation in machine learning, deep learning, and large language models, along with hands-on expertise in modern ML tools and frameworks. Design, develop, and deploy machine learning and deep learning models to solve real-world problems. Apply expert-level proficiency in Python to build and optimize scalable AI solutions. Work extensively with large language models (LLMs) such as LLaMA, Zephyr, Mistral, and OpenAI APIs. Implement and fine-tune transformer-based models, including BERT, GPT, RoBERTa, and related architectures. Leverage ML frameworks and libraries like PyTorch, Langchain, and Spacy in model development. Build and maintain robust MLOps infrastructure to support the complete ML lifecycle. Collaborate with cross-functional teams to integrate AI/ML capabilities into production systems. Demonstrate excellent problem-solving and critical thinking abilities to tackle complex AI challenges.
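The role above centres on transformer-based models such as BERT and RoBERTa. A minimal sketch of loading a pretrained checkpoint for sequence classification, assuming the transformers and torch packages and using distilbert-base-uncased as a stand-in checkpoint, looks like this:

```python
# Illustrative sketch: load a BERT-family model for a classification task,
# the kind of transformer workflow the posting describes. The classifier head
# here is freshly initialized, i.e. not yet fine-tuned.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # stand-in; BERT/RoBERTa variants work the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("The delivery was late and support was unhelpful.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities from the untrained head
```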

Posted 3 weeks ago

Apply

6.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Ind & Func AI Value Strategy Consultant, S&C GN
Management Level: 09 – Consultant
Location: All locations
Must-have skills: ML algorithms, NLP and predictive analytics, conversational analytics, Python, cloud knowledge (Azure/AWS/GCP), proficiency in BI tools (Power BI/Tableau/Qlik Sense/ThoughtSpot/Figma etc.). Leverages artificial intelligence and machine learning techniques to enhance BI processes, along with consulting experience in AI-driven BI transformations (defining BI governance, operating model, assessment framework, etc.).
Good-to-have skills: Knowledge of Gen AI model implementation (OpenAI, Mistral, Phi-2, Solar, Claude 3, etc.), agentic architecture implementation.

About S&C – Global Network: Accenture Strategy & Consulting Global shapes our clients’ future, combining deep business insight with an understanding of how technology will change industry and business models. We focus on issues related to business intelligence, analytics and data insights using AI & Gen AI strategy and solutions. Today, digital is changing the way organizations engage with their employees, business partners, customers, and communities. This is our unique differentiator. To bring this global perspective to our clients, Accenture Strategy's services include those provided by our S&C Global Network – a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our S&C Global Network teams complement our in-country teams to deliver innovative expertise and measurable value to clients all around the world.

WHAT’S IN IT FOR YOU? You’ll be part of a diverse, vibrant BI Strategy team: a dynamic and innovative group of Data and BI Strategy & Consulting professionals. We understand the vision and mission of our customers and help them by developing innovative BI strategy with value-driven use cases, a roadmap, and technological solutions that drive business growth and innovation. As a key pillar of our organization, the Engineering Products team works across BI, Data & AI: BI strategy, AI-enabled BI strategy, data modelling, data architecture, industry & AI value strategy and more, helping our customers set up a strong data platform foundation and a target roadmap to scale and evolve towards their AI/Gen AI and advanced analytics vision and meet the evolving needs of technology advancement.

What You Would Do In This Role: Participate in visioning workshops and develop innovative BI strategy and architecture frameworks tailored to the unique goals and aspirations of our clients, ensuring alignment with their evolving needs and preferences. Conduct comprehensive evaluations of clients' existing processes, technologies, and data ecosystems, uncovering opportunities for AI integration that resonate with Gen AI values and lifestyles. Align with industry and function teams, understand business problems, and translate them into BI, BI governance, AI-enabled BI and operating-model requirements; develop a target strategy covering business intelligence, tools and technology, BI operating model, BI governance and AI-enabled BI needs, along with key initiatives and a roadmap. Propose the best-suited LLMs (e.g., GPT-3.5, GPT-4, Llama) per the selection criteria framework and serve as a trusted advisor to our clients, offering expert guidance on AI-enabled BI. Propose adoption strategies that resonate with BI and plan persona-based BI strategy for enterprise functions.
Work closely with cross-functional teams to co-create and implement innovative solutions. Define personas and persona journeys from functional and technical aspects with respect to business intelligence. Architect, design, and deliver BI-enabled AI solutions that address the complex challenges faced by customers and businesses. Participate in client engagements with confidence and finesse, overseeing project scope, timelines, and deliverables to ensure seamless execution and maximum impact. Facilitate engaging workshops and training sessions to empower BI client stakeholders with the knowledge and skills needed to embrace data-driven transformation enabled through AI. Stay abreast of emerging BI, data and AI analytics trends and technologies, continuously improving internal capabilities and offerings. Participate in BI, AI-enabled BI and analytics-aligned solutions and client demos.

Who are we looking for?
Years of Experience: Candidates should typically have at least 6-9 years of experience in BI strategy, management, or a related field. This experience should include hands-on involvement in developing and implementing BI strategies for clients across various industries.
Education: A bachelor’s or master’s degree in Computer Science, Data Science, Engineering, Statistics, or a related field.
Additional eligibility: Strong analytical and strategic thinking skills are essential for this role. Candidates should be able to assess clients' business processes, technologies, BI, AI and data infrastructure to find opportunities for BI and AI-enabled BI integration and develop tailored BI strategy frameworks aligned with clients' goals. Demonstrated expertise in artificial intelligence technologies, including machine learning, natural language processing, computer vision, and deep learning. Candidates should have a solid understanding of AI concepts, algorithms, and their applications in solving real-world business problems. Excellent communication and presentation skills are crucial for effectively articulating complex technical concepts to non-technical stakeholders. Candidates should be able to convey AI strategy recommendations in a clear, concise, and compelling manner. Knowledge of cloud computing services (AWS, Azure, GCP) related to scaled AI and data analytics solutions. The ability to collaborate effectively with cross-functional teams is essential for this role. Candidates should be comfortable working with diverse teams to design and execute AI solutions that address clients' business challenges and deliver measurable results. Candidates should have a strong understanding of Gen AI preferences, behaviors, and values, along with an understanding and working knowledge of various large language models in order to propose and implement the best-suited LLMs for our customers based on AI strategies that resonate with Gen AI. Working experience with machine learning algorithms and statistical modeling techniques. Knowledge of MLOps tools and services from a strategic mindset will be a plus.

Desired experience: Minimum 6 years of experience working with clients in the products industry (Life Sciences, CPG, Industry & Retail) that are heavily influenced by AI & Gen AI preferences and behaviors is highly valued. Candidates who have a deep understanding of AI & Gen AI trends and market dynamics can provide valuable insights and strategic guidance to clients.
Minimum 5 years of proven experience and deep expertise in developing and implementing AI strategy frameworks tailored to the specific needs and aims of clients within the LS, Industry, CPG, and Retail sectors. The ability to craft innovative AI solutions that address industry-specific challenges and drive tangible business outcomes will set you apart. Minimum 6 years of a strong consulting background with a demonstrated ability to lead client engagements from start to completion; consulting experience should encompass stakeholder management and effective communication to ensure successful project delivery. Minimum 4 years of strong working experience across the AI implementation lifecycle and working knowledge of the AI and analytics lifecycle, including problem definition, data preparation, model building, validation, deployment, and monitoring.

Tools & Techniques: Knowledge of UI/UX design using React, Angular, and visualization tools (Power BI/Tableau/Qlik Sense). Knowledge of cloud and native services around AI & Gen AI implementation (AWS/Azure/GCP). Hands-on experience with programming languages such as Python, R, SQL or Scala. Working knowledge of AI and analytics platforms and tools, such as TensorFlow, PyTorch, KNIME, or similar technologies. Working knowledge of NLP (Natural Language Processing) techniques: text summarization, text classification, Named Entity Recognition, etc. Working knowledge of various machine learning models: supervised and unsupervised learning, classification, regression models, reinforcement learning, neural networks, etc. Knowledge of LLMs (OpenAI GPT, LlamaIndex, etc.). Knowledge of MLOps deployment, process tools and strategy (MLflow, AWS SageMaker, etc.).

Accenture is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.

Experience: Minimum 4 year(s) of experience is required
Educational Qualification: B.Tech/BE

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Experience: 4-6 Years

Key Responsibilities
● Fine-tune and train open-source LLMs (e.g., LLaMA or similar) for downstream applications.
● Build and orchestrate multi-agent workflows using LangGraph for production use cases.
● Implement and optimize RAG pipelines, including embedding stores and retrievers.
● Deploy and manage models via Hugging Face with robust inference capabilities.
● Develop modular backend components and APIs using Python.
● Ensure reproducibility, efficiency, and scalability in all LLM training and deployment tasks.
● Independently build and deliver project components from the ground up.

Must-Have Skills
● 4–6 years of hands-on experience in AI/ML engineering roles.
● Strong experience with LangGraph in real-world, multi-agent applications.
● Production-level experience in LLM fine-tuning and deployment (not POCs or academic work).
● Deep understanding of RAG pipeline design and implementation.
● Proficiency in Python for data pipelines and model orchestration.
● Familiarity with open-source LLMs like LLaMA, Mistral, Falcon, etc.
● Deployment experience with the Hugging Face ecosystem.
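A rough sketch of the retrieval half of the RAG pipelines this posting mentions, assuming the sentence-transformers, faiss-cpu, and numpy packages and a toy in-memory corpus:

```python
# Illustrative sketch of RAG retrieval: embed documents, index them, and pull
# top matches to hand to a generator. Corpus and query are made up.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "LLaMA and Mistral are open-weight language model families.",
    "LangGraph orchestrates multi-step, multi-agent workflows.",
    "FAISS provides fast nearest-neighbour search over embeddings.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])   # inner product equals cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = encoder.encode(["how do I search embeddings quickly?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")  # retrieved context for the generation step
```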

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Assurance – Senior – Digital

Position Details: As part of EY GDS Assurance Digital, you will be responsible for implementing innovative ideas through AI research to develop high-growth, impactful products. You will help EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. You will work with multi-disciplinary teams across the entire region to support global clients. This is a core full-time AI developer role, responsible for creating innovative solutions by applying AI-based techniques to business problems. As our in-house senior AI engineer, your expertise and skills will be vital to our ability to steer our Innovation agenda.

Responsibilities and Requirements (including experience, skills, and additional qualifications): Convert business problems into analytical problems and devise a solution approach. Clean, aggregate, analyze and interpret data to derive business insights. Own the AI/ML implementation process: model design, feature planning, testing, production setup, monitoring, and release management. Work closely with Solution Architects on deploying AI POCs and scaling them up to production-level applications. Bring a solid background in Python and experience deploying open-source models. Work on data extraction techniques from complex PDFs, Word documents and forms: entity extraction, table extraction, and information comparison.

Key Requirements/Skills & Qualifications: Excellent academic background, including at minimum a bachelor’s or master’s degree in Data Science, Business Analytics, Statistics, Engineering, Operational Research, or another related field with a strong focus on modern data architectures, processes, and environments. Solid background in Python with excellent coding skills. 4+ years of core data science experience in one or more of the areas below: Machine Learning (regression, classification, decision trees, random forests, time-series forecasting and clustering); understanding and usage of Large Language Models such as OpenAI models (ChatGPT, GPT-4) and frameworks such as LangChain and LlamaIndex; good understanding of open-source LLMs such as Mistral and Llama, and fine-tuning on custom datasets; Deep Learning (DNN, RNN, LSTM, encoder-decoder models); Natural Language Processing (text summarization, aspect mining, question answering, text classification, NER, language translation, NLG, sentiment analysis, sentence similarity); Computer Vision (image classification, object detection, tracking, etc.); SQL/NoSQL databases and their manipulation components; working knowledge of API deployment (Flask/FastAPI/Azure Function Apps), web app creation, Docker, and Kubernetes.

Additional skills requirements: Excellent written, oral, presentation and facilitation skills. Ability to coordinate multiple projects and initiatives simultaneously through effective prioritization, organization, flexibility, and self-discipline. Demonstrated project management experience. Knowledge of the firm’s reporting tools and processes. Proactive, organized, and self-sufficient, with the ability to prioritize and multitask.
Analyzes complex or unusual problems and can deliver insightful and pragmatic solutions. Ability to quickly and easily create, gather and analyze data from a variety of sources. A robust and resilient disposition, able to encourage discipline in team behaviors.

What We Look For: A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position Name: EY Assurance – Senior

Position Details: As part of EY GDS Assurance Digital, you will be responsible for implementing innovative ideas through AI research to develop high-growth, impactful products. You will help EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. You will work with multi-disciplinary teams across the entire region to support global clients. This is a core full-time AI developer role, responsible for creating innovative solutions by applying AI-based techniques to business problems. As our in-house senior AI engineer, your expertise and skills will be vital to our ability to steer our Innovation agenda.

Responsibilities and Requirements (including experience, skills, and additional qualifications): Convert business problems into analytical problems and devise a solution approach. Clean, aggregate, analyze and interpret data to derive business insights. Own the AI/ML implementation process: model design, feature planning, testing, production setup, monitoring, and release management. Work closely with Solution Architects on deploying AI POCs and scaling them up to production-level applications. Bring a solid background in Python and experience deploying open-source models. Work on data extraction techniques from complex PDFs, Word documents and forms: entity extraction, table extraction, and information comparison.

Key Requirements/Skills & Qualifications: Excellent academic background, including at minimum a bachelor’s or master’s degree in Data Science, Business Analytics, Statistics, Engineering, Operational Research, or another related field with a strong focus on modern data architectures, processes, and environments. Solid background in Python with excellent coding skills. 4+ years of core data science experience in one or more of the areas below: Machine Learning (regression, classification, decision trees, random forests, time-series forecasting and clustering); understanding and usage of Large Language Models such as OpenAI models (ChatGPT, GPT-4) and frameworks such as LangChain and LlamaIndex; good understanding of open-source LLMs such as Mistral and Llama, and fine-tuning on custom datasets; Deep Learning (DNN, RNN, LSTM, encoder-decoder models); Natural Language Processing (text summarization, aspect mining, question answering, text classification, NER, language translation, NLG, sentiment analysis, sentence similarity); Computer Vision (image classification, object detection, tracking, etc.); SQL/NoSQL databases and their manipulation components; working knowledge of API deployment (Flask/FastAPI/Azure Function Apps), web app creation, Docker, and Kubernetes.

Additional skills requirements: Excellent written, oral, presentation and facilitation skills. Ability to coordinate multiple projects and initiatives simultaneously through effective prioritization, organization, flexibility, and self-discipline. Demonstrated project management experience. Knowledge of the firm’s reporting tools and processes. Proactive, organized, and self-sufficient, with the ability to prioritize and multitask.
Analyzes complex or unusual problems and can deliver insightful and pragmatic solutions. Ability to quickly and easily create, gather and analyze data from a variety of sources. A robust and resilient disposition, able to encourage discipline in team behaviors.

What We Look For: A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

We are looking for a Senior Software Engineer – AI with 3+ years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment—fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies. Roles and Responsibilities Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs). Design and implement Agentic AI systems—autonomous agents capable of planning and executing multi-step tasks. Collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products. Write clean, scalable code and build robust APIs and services to support AI model deployment. Own feature delivery end-to-end—from research and experimentation to deployment and monitoring. Stay current with emerging AI frameworks, tools, and best practices and apply them in product development. Contribute to a high-performing team culture and mentor junior team members as needed. Skill Set: 3–6 years of overall software development experience, with 3+ years specifically in AI/ML engineering. Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face). Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio). Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel). Experience building and deploying ML models to production environments. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts. Comfortable working in a startup-like environment—self-motivated, adaptable, and willing to take ownership. Solid understanding of API development, version control, and modern DevOps/MLOps practices. Show more Show less
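For context on the LLM integration work described above, here is a minimal chat-completion call with the OpenAI Python SDK (v1+); the model name is only an example, and an OPENAI_API_KEY is assumed in the environment. Claude or Mistral endpoints follow a conceptually similar request/response shape.

```python
# Illustrative sketch of a single LLM call of the kind a Generative AI feature
# would wrap. Model name is an example; the API key is read from the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain an agentic AI workflow in two sentences."},
    ],
)
print(response.choices[0].message.content)
```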

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

Linkedin logo

Python Developer – AI Agent Development (CrewAI + LangChain) Location: Noida / Gwalior (On-site) Experience Required: Minimum 3+ years Employment Type: Full-time 🚀 About the Role We're seeking a Python Developer with hands-on experience in CrewAI and LangChain to join our cutting-edge AI product engineering team. If you thrive at the intersection of LLMs, agentic workflows, and autonomous tooling — this is your opportunity to build real-world AI agents that solve complex problems at scale. You’ll be responsible for designing, building, and deploying intelligent agents that leverage prompt engineering, memory systems, vector databases, and multi-step tool execution strategies. 🧠 Core Responsibilities Design and develop modular, asynchronous Python applications using clean code principles. Build and orchestrate intelligent agents using CrewAI: defining agents, tasks, memory, and crew dynamics. Develop custom chains and tools using LangChain (LLMChain, AgentExecutor, memory, structured tools). Implement prompt engineering techniques like ReAct, Few-Shot, and Chain-of-Thought reasoning. Integrate with APIs from OpenAI, Anthropic, HuggingFace, or Mistral for advanced LLM capabilities. Use semantic search and vector stores (FAISS, Chroma, Pinecone, etc.) to build RAG pipelines. Extend tool capabilities: web scraping, PDF/document parsing, API integrations, and file handling. Implement memory systems for persistent, contextual agent behavior. Leverage DSA and algorithmic skills to structure efficient reasoning and execution logic. Deploy containerized applications using Docker, Git, and modern Python packaging tools. 🛠️ Must-Have Skills Python 3.x (Async, OOP, Type Hinting, Modular Design) CrewAI (Agent, Task, Crew, Memory, Orchestration) – Must Have LangChain (LLMChain, Tools, AgentExecutor, Memory) Prompt Engineering (Few-Shot, ReAct, Dynamic Templates) LLMs & APIs (OpenAI, HuggingFace, Anthropic) Vector Stores (FAISS, Chroma, Pinecone, Weaviate) Retrieval-Augmented Generation (RAG) Pipelines Memory Systems: BufferMemory, ConversationBuffer, VectorStoreMemory Asynchronous Programming (asyncio, LangChain hooks) DSA / Algorithms (Graphs, Queues, Recursion, Time/Space Optimization) 💡 Bonus Skills Experience with Machine Learning libraries (Scikit-learn, XGBoost, TensorFlow basics) Familiarity with NLP concepts (Embeddings, Tokenization, Similarity scoring) DevOps familiarity (Docker, GitHub Actions, Pipenv/Poetry) 🧭 Why Join Us? Work on cutting-edge LLM agent architecture with real-world impact. Be part of a fast-paced, experiment-driven AI team. Collaborate with passionate developers and AI researchers. Opportunity to build from scratch and influence core product design. If you're passionate about building AI systems that can reason, act, and improve autonomously — we’d love to hear from you! 📩 Drop your resume and GitHub to sameer.khan@techcarrel.com. Show more Show less
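A hedged sketch of the Agent/Task/Crew pattern named in this posting, assuming a recent crewai release and an OpenAI key in the environment; exact constructor arguments vary between versions, so treat the details as approximate rather than definitive.

```python
# Hedged sketch of the CrewAI pattern (agents, tasks, crew, kickoff).
# Assumes a recent `crewai` release and OPENAI_API_KEY in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research agent",
    goal="Summarise what retrieval-augmented generation is",
    backstory="You explain ML concepts concisely for engineers.",
)
summarise = Task(
    description="Write a three-sentence explanation of RAG pipelines.",
    expected_output="Three plain-English sentences.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[summarise])
print(crew.kickoff())  # runs the task through the agent and returns its output
```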

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Blend is hiring a Lead Data Scientist (Generative AI) to spearhead the development of advanced AI-powered classification and matching systems on Databricks. You will contribute to flagship programs like the Diageo AI POC by building RAG pipelines, deploying agentic AI workflows, and scaling LLM-based solutions for high-precision entity matching and MDM modernization.

Key Responsibilities
Design and implement end-to-end AI pipelines for product classification, fuzzy matching, and deduplication using LLMs, RAG, and Databricks-native workflows. Develop scalable, reproducible AI solutions within Databricks notebooks and job clusters, leveraging Delta Lake, MLflow, and Unity Catalog. Engineer Retrieval-Augmented Generation (RAG) workflows using vector search and integrate with Python-based matching logic. Build agent-based automation pipelines (rule-driven + GenAI agents) for anomaly detection, compliance validation, and harmonization logic. Implement explainability, audit trails, and governance-first AI workflows aligned with enterprise-grade MDM needs. Collaborate with data engineers, BI teams, and product owners to integrate GenAI outputs into downstream systems. Contribute to modular system design and documentation for long-term scalability and maintainability.

Qualifications
Bachelor’s/Master’s in Computer Science, Artificial Intelligence, or a related field. 7+ years of overall Data Science experience with 2+ years in Generative AI / LLM-based applications. Deep experience with the Databricks ecosystem: Delta Lake, MLflow, DBFS, Databricks Jobs & Workflows. Strong Python and PySpark skills with the ability to build scalable data pipelines and AI workflows in Databricks. Experience with LLMs (e.g., OpenAI, LLaMA, Mistral) and frameworks like LangChain or LlamaIndex. Working knowledge of vector databases (e.g., FAISS, Chroma) and prompt engineering for classification/retrieval. Exposure to MDM platforms (e.g., Stibo STEP) and familiarity with data harmonization challenges. Experience with explainability frameworks (e.g., SHAP, LIME) and AI audit tooling.

Preferred Skills
Knowledge of agentic AI architectures and multi-agent orchestration. Familiarity with Azure Data Hub and enterprise data ingestion frameworks. Understanding of data governance, lineage, and regulatory compliance in AI systems.

Thrive & Grow with Us:
Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
Snack Zone: Stay fuelled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
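A minimal illustration of the MLflow experiment tracking that underpins the Databricks workflow above; the experiment name, parameters, and metric values are placeholders rather than anything from the posting.

```python
# Illustrative sketch of experiment tracking with MLflow, assuming the `mlflow`
# package and a local or Databricks-hosted tracking server.
import mlflow

mlflow.set_experiment("product-matching-poc")      # hypothetical experiment name
with mlflow.start_run(run_name="rag-baseline"):
    mlflow.log_param("embedding_model", "all-MiniLM-L6-v2")
    mlflow.log_param("top_k", 5)
    mlflow.log_metric("match_precision", 0.91)     # placeholder value
    mlflow.log_metric("match_recall", 0.84)        # placeholder value
```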

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

Ready to revolutionize the future of software development? At IgniteTech, we're not just keeping pace with the AI revolution - we're leading it. In an industry where 30% of projects still miss their deadlines due to traditional development bottlenecks, we're crafting a new paradigm that merges cutting-edge GenAI with software engineering excellence. This isn't your typical software engineering role. As our AI Software Engineer, you'll be pioneering the integration of artificial intelligence into the very fabric of software development. Imagine being at the helm of innovations that don't just incrementally improve but fundamentally transform how software is conceived, architected, and delivered. If you're someone who gets excited about emerging technologies, thrives on pushing boundaries, and wants to be part of a team that's redefining industry standards, we want to talk to you. This role goes beyond conventional coding - it's about shaping the future of software development through the lens of artificial intelligence. What You Will Be Doing Your mission will be twofold: Architect the Future: Create groundbreaking AI-powered systems that revolutionize software development automation, from intelligent architecture generation to sophisticated predictive coding frameworks. Stay at the Cutting Edge: Immerse yourself in the AI technology landscape through intensive R&D, knowledge sharing at prestigious conferences, and active participation in both academic and professional technology communities. What You Won’t Be Doing To be clear, this role transcends traditional development work: You won't be bogged down by routine maintenance tasks or basic debugging assignments This isn't about conventional software development or implementing standard features - we're focused on AI-driven innovation and transformation AI Software Engineer Key Responsibilities Your core mission is to leverage your engineering prowess to: Drive transformative improvements in development velocity Minimize human error through AI-powered solutions Elevate code quality standards Accelerate product time-to-market Enhance overall customer satisfaction through superior software delivery Basic Requirements To succeed in this role, you'll need: A proven track record of 3+ years driving impactful engineering initiatives Hands-on experience with modern AI coding assistants (Github Copilot, Cursor.sh, v0.dev) Demonstrated success in implementing Generative AI solutions Practical experience working with various LLM platforms (GPT, Claude, Mistral) to address real-world business challenges About IgniteTech If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. 
We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5269-IN-Pune-AISoftwareEngi.004 Show more Show less

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Software Engineer – AI (Prompt Engineering & Full Stack) Location: Remote / United States / India Employment Type: Full-Time Product Stage: (Stealth Mode) AI Product Platform for Healthcare About Us: We’re a growing healthtech startup on a mission to disrupt the $500B+ US Healthcare Operations & Management space using cutting-edge AI tools. Our goal is to build a modern, intelligent platform that eliminates administrative waste and improves healthcare operations. We’re backed by the industry's best investors and SMEs, and we're building an elite team of engineers, designers, and domain experts. We are on a mission to help Clinicians and patients so that their administrative burden is simplified and the patients have the best outcomes. The Role: We’re looking for an experienced Software Engineer – AI who thrives in ambiguity, moves fast, and is passionate about applying AI (& concepts...) to solve real-world problems. You’ll play a foundational role in shaping both the product and the engineering culture. This role blends Prompt Engineering, LLM application design, and Full Stack Development to help build a next-generation healthcare intelligence engine to solve real-life problems for hospitals and health systems. What You’ll Do: Design and develop prompt architectures for LLM-based workflows Engineer and fine-tune prompt chains for optimal performance, accuracy, and reliability Collaborate with domain experts to understand the nuances of healthcare and translate them into intelligent AI interactions Develop and deploy secure, scalable full-stack features – from front-end UIs to back-end services and APIs Integrate AI capabilities into product workflows using tools like LangChain, OpenAI APIs, or open-source LLMs Work closely with the founding team to iterate quickly and bring the product to real-life operations Skills & Experience: Required: 3–7 years of experience in software engineering, ideally in AI or health-tech startups Strong grasp of Prompt Engineering for LLMs (e.g., GPT-4, Claude, Mistral, etc.) Experience with Full Stack Development using modern frameworks (e.g., React, Next.js, Node.js, Python, Flask, or FastAPI) Familiarity with AI tooling (LangChain, Pinecone, vector databases, etc.) Understanding of HIPAA and secure data handling practices Experience shipping production-grade code in a fast-paced, agile environment Good to have (+plus points) Knowledge of US Healthcare operations (e.g., CPT, ICD-10 codes, SNOMED, EDI 837/835, payer, provider workflows) Experience with healthcare data formats (HL7, FHIR, EDI) Experience with DevOps and cloud deployment (AWS/GCP/Azure) Why Join Us: Work on a mission-critical, AI-first product that impacts real healthcare outcomes Join at ground zero and shape the technical and product direction Competitive compensation, meaningful equity, and flexible work environment Build with the latest in AI – and solve problems the world hasn’t cracked yet If you’re excited to use AI to fix the broken machinery of healthcare, contact us asap! Show more Show less
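A small sketch of the full-stack pattern this role describes: a FastAPI endpoint wrapping an LLM call. The summarise_note function is a hypothetical stand-in for a prompt-engineered model call (OpenAI, Claude, Mistral, or similar).

```python
# Illustrative sketch: expose an AI capability behind an API endpoint.
# `summarise_note` is a placeholder for the actual LLM call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Note(BaseModel):
    text: str

def summarise_note(text: str) -> str:
    # Stand-in for a prompt-engineered LLM call; no PHI should be logged here.
    return text[:200]

@app.post("/summarise")
def summarise(note: Note) -> dict:
    return {"summary": summarise_note(note.text)}
```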

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Company Description
UAE-based ZySec AI provides cutting-edge cybersecurity solutions to help enterprises tackle evolving security challenges at scale. Utilizing an autonomous AI workforce, ZySec AI enhances operational efficiency by automating repetitive, resource-intensive tasks, enabling security teams to focus on strategic priorities. Our mission is to make AI more efficient, accessible, and private for security professionals. We're building the future of Autonomous Data Intelligence at CyberPod AI, and we're looking for a deeply technical, hands-on AI Engineer to push the boundaries of what's possible with Large Language Models (LLMs). This role is for someone who's already been in the trenches: fine-tuned foundation models, experimented with quantization and performance tuning, and knows PyTorch inside out. If you're passionate about optimizing LLMs, crafting efficient reasoning architectures, and contributing to open-source communities like Hugging Face, this is your playground.

Role Description
Fine-tune Large Language Models (LLMs) on custom datasets for specialized reasoning tasks. Design and run benchmarking pipelines across accuracy, speed, token throughput, and energy efficiency. Implement quantization, pruning, and distillation techniques for model compression and deployment readiness. Evaluate and extend agentic RAG (Retrieval-Augmented Generation) pipelines and reasoning agents. Contribute to SOTA model architectures for multi-hop, temporal, and multimodal reasoning. Collaborate closely with the data engineering, infra, and applied research teams to bring ideas from paper to production. Own and drive experiments, ablations, and performance dashboards end-to-end.

Requirements
Hands-on experience working with deep learning and large models, particularly LLMs. Strong understanding of PyTorch internals: autograd, memory profiling, efficient dataloaders, mixed precision. Proven track record in fine-tuning LLMs (e.g., LLaMA, Falcon, Mistral, OpenLLaMA, T5, etc.) on real-world use cases. Benchmarking skills: can run standardized evals (e.g., MMLU, GSM8K, HELM, TruthfulQA) and interpret metrics. Deep familiarity with quantization techniques: GPTQ, AWQ, QLoRA, bitsandbytes, and low-bit inference. Working knowledge of the Hugging Face ecosystem (Transformers, Accelerate, Datasets, Evaluate). Active Hugging Face profile with at least one public model/repo published. Experience in training and optimizing multi-modal models (vision-language/audio) is a big plus. Published work (arXiv, GitHub, blogs) or open-source contributions preferred.

If you are passionate about AI and want to be a part of a dynamic and innovative team, then ZySec AI is the perfect place for you. Apply now and join us in shaping the future of artificial intelligence.
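A hedged sketch of the QLoRA-style setup referenced above (4-bit loading plus LoRA adapters), assuming the transformers, peft, and bitsandbytes packages, a CUDA GPU, and access to an open-weight checkpoint; details vary by library version.

```python
# Hedged sketch: load a causal LM in 4-bit and attach LoRA adapters, the kind
# of memory-efficient fine-tuning setup the posting alludes to.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",        # example open-weight checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()       # only the adapter weights are trainable
```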

Posted 3 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

India

Remote

Linkedin logo

Job description
Persevex is a cutting-edge Edtech startup focused on building intelligent, autonomous AI agents that collaborate in multi-agent systems. We create agent-based architectures that enable autonomous decision-making and seamless cooperation to solve complex problems. Join us to help pioneer the future of decentralized AI! We are a fast-growing Edtech company driven by innovation, collaboration, and adaptability. Our mission is to deliver cutting-edge solutions that align with market demands and technical feasibility.

Role Overview
As a Multi-Agent Systems Architect at Persevex, you will design and develop multi-agent architectures that empower AI agents to work together autonomously. You will be responsible for creating scalable, robust systems that enable agents to communicate, negotiate, and collaborate effectively, driving innovation in AI-driven automation.

Key Responsibilities
• Design and implement multi-agent system architectures that enable autonomous decision-making and collaboration among AI agents.
• Develop agent-based frameworks that support task allocation, communication protocols, and coordination strategies.
• Build and optimize agent communication layers using APIs, vector databases, and messaging protocols.
• Integrate large language models (LLMs) and other AI components into agent workflows to enhance capabilities.
• Work directly with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, etc.).
• Collaborate closely with product, engineering, and research teams to translate business requirements into technical solutions.
• Ensure scalability, reliability, and fault tolerance of multi-agent systems in production environments.
• Continuously research and apply the latest advances in multi-agent systems, decentralized AI, and autonomous agents.
• Document architecture designs, workflows, and implementation details clearly for team collaboration and future reference.

What We’re Looking For
• Practical experience designing and building multi-agent systems or agent-based architectures.
• Proficiency in Python and familiarity with AI/ML frameworks (e.g., LangChain, AutoGen, HuggingFace).
• Understanding of decentralized control, agent communication protocols, and emergent system design.
• Experience with cloud platforms (AWS, GCP, Azure) and API integrations.
• Strong problem-solving skills and ability to work independently in a remote startup environment.
• No formal degree required - your skills, projects, and passion matter most.

Location: 100% Remote
Experience: 0-1 year

Compensation Structure: This role follows a structured pathway designed to prepare candidates for the responsibilities of a full-time position.
• Pre-Qualification Internship (Mandatory): Duration: 2 months. Stipend: ₹5,000/month. Objective: to evaluate foundational skills, work ethic, and cultural fit within the organization.
• Internship (Mandatory): Duration: 4 months. Stipend: ₹5,000–₹15,000/month (based on performance during the pre-qualification internship).

Why Join Persevex?
• Work remotely with a passionate, innovative startup.
• Contribute to pioneering multi-agent AI systems shaping the future of autonomous technology.
• Grow your career from internship to full-time with competitive pay and equity opportunities.
• Career Growth: Prove your potential and secure a full-time role with competitive compensation.

Note: This is not a direct full-time job opportunity. Candidates must commit to our mandatory two-stage internship process.
If you’re genuinely interested in joining us, we’d love to hear from you! Ready to build the future of autonomous AI? Apply now and join Persevex’s mission!
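As a library-free illustration of the agent cooperation idea in this listing, the toy below passes messages between two "agents"; a real system would back each agent's act() method with an LLM call (OpenAI, Anthropic, Mistral, etc.).

```python
# Toy multi-agent loop: agents take turns transforming a shared message.
# Purely illustrative; the act() policy is a placeholder for an LLM-backed agent.
from collections import deque

class Agent:
    def __init__(self, name: str):
        self.name = name

    def act(self, message: str) -> str:
        # Placeholder policy; an LLM-backed agent would generate this reply.
        return f"{self.name} handled: {message}"

def run(agents, task: str, steps: int = 3) -> str:
    queue = deque([task])
    for step in range(steps):
        agent = agents[step % len(agents)]
        queue.append(agent.act(queue.popleft()))
    return queue.pop()

print(run([Agent("planner"), Agent("executor")], "draft a lesson plan"))
```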

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Linkedin logo

About Us : We are a technology-driven publishing and legal-tech organization currently expanding our capabilities through advanced AI initiatives. With a strong internal team proficient in .NET, MERN, MEAN stacks, and recent chatbot development using OpenAI and LLaMA, we are seeking an experienced AI Solution Architect to lead the design, development, and deployment of transformative AI projects. Role Overview : We are looking for a Solution Architect with proven experience in AI/ML system design, team leadership, and successful delivery of Vector DB-based, RAG-enabled, and LLM-tuned solutions. You will act as both a technical leader and a strategic enabler, guiding developers across technologies to adopt best practices in integrating AI into products and internal processes. This is a critical onsite role, working directly with engineering managers and developers across technologies (including .NET, Python, MERN/MEAN), ensuring scalable, maintainable, and innovative AI-driven systems. Key Responsibilities : Architect, plan, and deliver enterprise-grade AI solutions using : Vector Databases (e.g., FAISS, Chroma) RAG Pipelines (Retrieval-Augmented Generation) LLM Pre-training, Fine-tuning, and RLHF (Reinforcement Learning with Human Feedback) Lead AI solution design discussions and provide actionable architecture blueprints. Collaborate with engineering teams (Python, .NET, MERN, MEAN) to integrate AI modules into existing and new systems. Evaluate and select AI/ML frameworks, infrastructure, and tools suited for business needs. Guide and mentor junior AI engineers and full-stack developers on technical direction, code quality, and solution robustness. Define and implement best practices for model training, versioning, deployment, and monitoring. Translate business needs into technical strategies for AI adoption across products and internal processes. Stay ahead of emerging AI trends and continuously assess their applicability to business goals. Required Skills & Qualifications : Bachelors or Masters in Computer Science, Engineering, AI/ML, or related fields. 5+ years of experience in software architecture with a focus on AI/ML system design and project delivery. Hands-on expertise in : Vector Search Engines (e.g., FAISS, Pinecone, Chroma) LLM Tuning & Deployment (e.g., LLaMA, GPT, Mistral, etc.) RAG systems, Prompt Engineering, LangChain RLHF techniques and supervised finetuning Experience working with multi-tech teams (especially .NET and JavaScript-based stacks). Strong command over architecture principles, scalability, microservices, and API-based integration. Excellent team leadership, mentorship, and stakeholder communication skills. Prior experience delivering at least one production-grade AI/NLP project with measurable impact. Nice to Have : Experience with tools like MLflow, Weights & Biases, Docker, Kubernetes, Azure/AWS AI services Exposure to data pipelines, MLOps, and AI governance Familiarity with enterprise software lifecycle and DevSecOps practices Show more Show less
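A minimal sketch of the vector-database retrieval step behind the RAG solutions this role owns, using Chroma's in-memory client; it assumes the chromadb package with its default embedding function, and the collection name and documents are made up.

```python
# Illustrative sketch: store a couple of documents in a Chroma collection and
# retrieve the best match to pass to an LLM as context.
import chromadb

client = chromadb.Client()
collection = client.create_collection(name="legal_docs")   # hypothetical collection
collection.add(
    documents=[
        "A power of attorney authorises one person to act for another.",
        "A non-disclosure agreement restricts sharing of confidential information.",
    ],
    ids=["doc1", "doc2"],
)
results = collection.query(query_texts=["who can act on someone's behalf?"], n_results=1)
print(results["documents"][0][0])  # top match to feed into the generation step
```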

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

We're looking for a Senior Manager, Engineering This role is Office Based, Hyderabad Office As a Manager or Senior Manager you will lead and grow a high-performing engineering team within a product-driven environment. This individual will be responsible for overseeing the development, delivery, and support of scalable, robust, and high-quality software products. The ideal candidate will have expertise in Ruby on Rails (RoR), React, AWS, and API design, with a proven ability to inspire and manage engineers to deliver exceptional results. In this role you will.. Team Leadership And Development Lead a team of software engineers, providing mentorship, technical guidance, and career development. Foster a culture of collaboration, innovation, and continuous improvement within the team. Technical Oversight Drive the architecture and development of robust, scalable, and maintainable solutions using Ruby on Rails, React, AWS, and RESTful APIs. Conduct design and code reviews to ensure high standards of quality and maintainability. Product Delivery Collaborate with Product Managers, UX designers, and other stakeholders to define product roadmaps and technical requirements. Oversee sprint planning, resource allocation, and project execution to meet deadlines and deliverables. Operational Excellence Implement best practices in DevOps, CI/CD, and cloud infrastructure management to ensure high system availability and performance. Monitor and optimize application performance, scalability, and security. Stakeholder Engagement Act as a liaison between the engineering team and business stakeholders, translating business needs into technical solutions. Communicate project progress, risks, and outcomes to leadership and stakeholders. Technical Expertise You’ve Got What It Takes If You Have… Strong hands-on experience with Ruby on Rails, React, and AWS services (e.g., EC2, S3, RDS, Lambda). Proficiency in designing and implementing RESTful APIs and microservices architecture. Leadership Experience Proven track record of leading and managing engineering teams in a product company environment. Ability to inspire and motivate team members to achieve their highest potential. Product Mindset Deep understanding of the product lifecycle, from ideation to deployment and iteration. Experience collaborating with cross-functional teams to deliver customer-centric solutions. Operational Skills Strong background in Agile methodologies, sprint planning, and resource management. Knowledge of DevOps practices, including CI/CD pipelines and cloud infrastructure automation. GenAI Experience Exposure to Generative AI tools and technologies, such as OpenAI, Anthropic Claude, Google Gemini, Mistral, Meta Llama, and Hugging Face Transformers. Experience using GenAI-powered tools (e.g., GitHub Copilot, Tabnine, Codeium, Amazon CodeWhisperer) to improve developer productivity. Successfully leveraged GenAI for code generation, automated testing, bug fixes, and software documentation. Integrated AI-based assistants into daily workflows to enhance efficiency in task management, knowledge retrieval, and decision-making. Implemented GenAI-powered chatbots, virtual assistants, and recommendation engines to optimize internal processes and customer experiences. Advocated for the responsible use of GenAI to ensure secure, ethical, and compliant AI-driven solutions. Communication Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to non-technical audiences. Our Culture Spark Greatness. 
Shatter Boundaries. Share Success. Are you ready? Because here, right now, is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today. Who We Are Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications Location : Sector 63, Gurugram, Haryana – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM – 8:00 PM Experience : 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments Function : AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment Apply : careers@darwix.ai Subject Line : “Application – Applied ML Scientist – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate: Real-time nudges for agents and reps Conversational analytics and scoring to drive performance CCTV-based behavior insights to boost in-store conversion We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and others. We’re backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies. Role Overview We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers. This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments. 
Key Responsibilities Voice-to-Text (ASR) Engineering Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages Integrate diarization and punctuation recovery pipelines Benchmark and improve transcription accuracy across noisy call environments Optimize ASR latency for real-time and batch processing modes NLP & Conversational Intelligence Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.) Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance Contribute to real-time inference APIs for NLP outputs in live dashboards GenAI & LLM Systems Design and test GenAI prompts for summarization, coaching, and feedback generation Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics Implement prompt tuning, caching, and fallback strategies to ensure system reliability Experimentation & Deployment Own model lifecycle: data preparation, training, evaluation, deployment, monitoring Build reproducible training pipelines using MLflow, DVC, or similar tools Write efficient, well-structured, production-ready code for inference APIs Document experiments and share insights with cross-functional teams Required Qualifications Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields 3–7 years experience applying ML in production, including NLP and/or speech Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper) Strong Python skills with experience in PyTorch or TensorFlow Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker) Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms) Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility Ability to work collaboratively in a fast-paced startup environment Preferred Skills Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.) Knowledge of diarization and speaker separation algorithms Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines Familiarity with inference optimization techniques (quantization, ONNX, TorchScript) Contribution to open-source ASR or NLP projects Working knowledge of AWS/GCP/Azure cloud platforms What Success Looks Like Transcription accuracy improvement to ≥ 85% across core languages NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls 3–5 LLM-driven product features delivered in the first year Inference latency reduced by 30–50% through model and infra optimization AI features embedded across all Tier 1 customer accounts within 12 months Life at Darwix AI You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks. 
Compensation & Perks Competitive fixed salary based on experience Quarterly/Annual performance-linked bonuses ESOP eligibility post 12 months Compute credits and model experimentation environment Health insurance, mental wellness stipend Premium tools and GPU access for model development Learning wallet for certifications, courses, and AI research access Career Path Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines How to Apply Email the following to careers@darwix.ai : Updated resume (PDF) A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?” Optional: GitHub or portfolio links demonstrating your work Subject Line : “Application – Applied Machine Learning Scientist – [Your Name]” Show more Show less
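For candidates new to the ASR side of the role above, the core task of turning a noisy, multilingual call recording into timestamped text is often prototyped in a few lines. A minimal sketch using the open-source openai-whisper package (the posting mentions WhisperX and wav2vec 2.0; the file name, model size, and language code here are illustrative assumptions):

```python
# Minimal multilingual transcription sketch with the open-source openai-whisper package.
# "call_recording.wav" is a hypothetical file; model size and language are illustrative.
import whisper

model = whisper.load_model("medium")   # larger checkpoints trade latency for accuracy

# Transcribe a Hindi call recording; Whisper auto-detects the language if none is given.
result = model.transcribe("call_recording.wav", language="hi", fp16=False)

print(result["text"])                  # full transcript
for segment in result["segments"]:     # per-segment timestamps, useful for diarization alignment
    print(f'{segment["start"]:.1f}s-{segment["end"]:.1f}s: {segment["text"]}')
```

Production pipelines would add diarization, punctuation recovery, and latency optimization on top of this, as the posting describes.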

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
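Much of the RAG and vector-database work listed in the posting above comes down to embedding text, indexing it, and retrieving the closest chunks for a prompt. A compact sketch assuming sentence-transformers for embeddings and FAISS for similarity search (the model name and documents are placeholders, not Darwix AI's actual stack):

```python
# Toy retrieval step of a RAG pipeline: embed documents, index them, fetch top matches for a query.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Refund requests must be raised within 7 days of delivery.",
    "Premium plans include a dedicated account manager.",
    "Agents should confirm the customer's registered phone number first.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")      # placeholder embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])            # inner product equals cosine on normalized vectors
index.add(doc_vecs)

query = "Customer is asking how long they have to claim a refund"
q_vec = embedder.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)                    # top-2 most similar chunks

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)                                           # this prompt would then go to an LLM
```

A production system would swap the in-memory index for a managed vector DB (Pinecone, Weaviate, etc.) and add chunking, metadata filtering, and caching.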

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Linkedin logo

About Valorant Valorant is a fast-growing procurement consulting firm helping mid-market and PE-backed companies transform operations. We’re now launching our next chapter: building AI products to radically automate and augment procurement workflows. This isn’t about chatbot demos. We’re building real enterprise software with real client data, solving real problems. About the Role As our Founding Full-Stack AI Engineer / Technical Product Lead , you’ll drive the design, development, and launch of intelligent agentic systems that blend LLMs, vector search, structured data, and enterprise workflows. You’ll work closely with domain experts, iterate fast, and own the tech stack end to end—from backend services to frontend interfaces. This is a zero-to-one opportunity to build production-grade AI tools that work at scale and matter to real businesses. What You’ll Do Architect and build AI-powered products for procurement and supply chain use cases Develop LLM features using RAG (Retrieval-Augmented Generation), prompt engineering, and custom context pipelines Implement semantic document search using vector databases (Chroma, FAISS, etc.) Build Python backend services for data ingestion, transformation, and orchestration of AI pipelines Work with structured enterprise data (e.g., ledgers, SaaS exports, CSVs) to extract insights and power analytics Design or collaborate on frontend development for dashboards, chat interfaces, and user-facing tools (React or similar) Translate complex workflows into clean, intuitive UX with strong usability principles Ensure enterprise-grade reliability, explainability, and data privacy Contribute to product roadmap, feature planning, and fast iteration cycles with consultants and PMs Take ownership of the full stack and help shape a modern, scalable AI-first architecture What We’re Looking For 5–7+ years of experience in software engineering, full-stack development, or AI/ML product engineering Hands-on experience shipping LLM features in production (OpenAI, Claude, LLaMA, Mistral, etc.) Strong Python skills; experience with LangChain, LLaMA Index, or similar frameworks Experience with vector search, semantic indexing, and chunking strategies Backend engineering experience: designing modular APIs, microservices, and orchestration layers (FastAPI, Flask, Django, etc.) Proficiency in frontend development using React, Vue, or similar frameworks Understanding of UI/UX principles and ability to turn workflows into usable interfaces Familiarity with structured data workflows: pandas, SQL, and validation pipelines Exposure to cloud environments and dev tooling (Docker, GitHub Actions, AWS/GCP) Pragmatic, product-focused mindset — values useful outputs over academic perfection Bonus: Domain experience in procurement, supply chain, legal tech, or enterprise SaaS Bonus: Experience mentoring junior engineers or contributing to team scaling Why Join Us? Build meaningful AI products that solve real problems — not just tech showcases Collaborate with domain experts and access rich real-world data from day one Operate with autonomy, fast iteration cycles, and strong strategic backing Shape the tech foundation of an ambitious, AI-native product company Competitive pay, flexible remote work, and potential equity for high-impact contributors Show more Show less
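Since the role above involves exposing LLM and retrieval logic as backend services, here is a minimal FastAPI sketch of such a service. The answer_question helper and its behavior are hypothetical stand-ins for whatever RAG or LLM call would sit behind the endpoint:

```python
# Minimal FastAPI service wrapping a (stubbed) LLM/RAG call.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Procurement Q&A (sketch)")

class Query(BaseModel):
    question: str
    top_k: int = 3

def answer_question(question: str, top_k: int) -> str:
    # Placeholder: a real service would retrieve context from a vector store and call an LLM.
    return f"(stub) would retrieve {top_k} chunks and answer: {question}"

@app.post("/ask")
def ask(query: Query) -> dict:
    return {"answer": answer_question(query.question, query.top_k)}
```

Keeping the service layer this thin makes it easier to swap retrieval backends or models without touching the frontend.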

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Gwalior, Madhya Pradesh

On-site

Indeed logo

Job Title: Full Stack Developer (Angular 18, Mistral AI, Prompt Engineering) Experience: 3+ Years Location: Gwalior (work from office) Job Type: Full time / Permanent About Us: Synram Software Services Pvt. Ltd., a subsidiary of the renowned FG International GmbH, Germany, is a premier IT solutions provider specializing in ERP systems, E-commerce platforms, Mobile Applications, and Digital Marketing. We are committed to delivering tailored solutions that drive success across various industries. About the Role: We are seeking a dynamic and experienced Full Stack Developer with a strong foundation in Angular 18 , Mistral AI tools , and prompt engineering , complemented by solid full stack development skills. The ideal candidate should be enthusiastic about building intelligent, scalable applications and comfortable working across both frontend and backend technologies. Key Responsibilities: Develop, test, and maintain responsive web applications using Angular 18 . Design and implement RESTful APIs and backend services using modern stacks (Node.js, Express, MongoDB/MySQL, etc.). Integrate AI tools like Mistral into web and enterprise-level applications. Design and optimize prompts to fine-tune and interact with AI models for varied use cases. Collaborate with cross-functional teams to define, design, and ship new features. Ensure the performance, quality, and responsiveness of applications. Write clean, scalable, and maintainable code with proper documentation. Participate in code reviews, testing, and debugging. Stay updated with emerging trends in AI, prompt engineering, and full stack development. Required Skills: 3+ years of full stack development experience. Proficiency in Angular 18 (or 15+ with quick adaptability to v18). Experience with backend technologies such as Node.js, Express.js, Python (optional) . Solid understanding of Mistral AI tools and their practical applications. Proficiency in prompt engineering for large language models (LLMs). Strong knowledge of HTML, CSS, TypeScript, JavaScript , and modern frontend practices. Familiarity with NoSQL/SQL databases (MongoDB, PostgreSQL, etc.). Version control using Git . Understanding of DevOps practices and deployment processes. Nice to Have: Experience with cloud platforms (AWS, Azure, GCP). Exposure to AI/ML integration in real-world products. Understanding of containerization tools (Docker, Kubernetes). Familiarity with CI/CD pipelines. Education: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Why Join Us? Work with cutting-edge AI and modern tech stacks. Opportunity to innovate and work on impactful projects. Collaborative and inclusive team culture. Flexible working hours If you're ready to take on new challenges and join a team that values innovation and creativity, apply now! Mail us: career@synram.co or Call us on +91-9111381555 Job Types: Full-time, Permanent Pay: ₹25,681.85 - ₹40,003.00 per month Benefits: Flexible schedule Health insurance Leave encashment Schedule: Day shift Fixed shift Weekend availability Supplemental Pay: Overtime pay Performance bonus Yearly bonus Ability to commute/relocate: Gwalior, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred) Experience: Full-stack development: 3 years (Preferred) Language: English (Preferred) Work Location: In person Application Deadline: 05/06/2025 Expected Start Date: 26/05/2025
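The Mistral integration and prompt-engineering work described above largely amounts to sending well-structured chat requests to a hosted model. A hedged sketch using plain requests (the endpoint, model name, and payload shape follow Mistral's publicly documented chat completions API at the time of writing; verify against the current documentation before relying on it):

```python
# Sketch of a prompt-engineered call to a hosted Mistral model over HTTP.
# Assumes MISTRAL_API_KEY is set in the environment; the model name is illustrative.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # check current docs before use

payload = {
    "model": "mistral-small-latest",
    "messages": [
        {"role": "system",
         "content": "You are a support assistant for an ERP product. Answer in 3 bullet points."},
        {"role": "user",
         "content": "Summarise the steps to raise a purchase order."},
    ],
    "temperature": 0.3,   # low temperature for consistent, instruction-following output
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

In an Angular 18 frontend, the same request would typically be proxied through the Node.js backend so the API key never reaches the browser.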

Posted 3 weeks ago

Apply

6.0 - 11.0 years

40 - 60 Lacs

Kolkata

Work from Office

Naukri logo

We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform. Key Responsibilities AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration. Model Integration & Prompt Engineering: Leverage APIs from OpenAI, Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses. Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate). Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling. Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n. Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access. Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features. Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to improve AI quality continuously. Qualifications Must-Haves: 5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects. Strong background in NLP, dialog systems, or voice AI, preferably with production experience. Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.). Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow). Experience deploying RAG pipelines, vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic. Familiarity with voice processing (ASR, TTS, IVR design). Solid understanding of API-based integration and microservices. Deep care for data privacy, multi-tenancy security, and ethical AI practices. Nice-to-Haves: Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs. Knowledge of sales pipelines and marketing automation tools. Exposure to calendar integrations (Google Calendar API, Microsoft Graph). Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic. Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure). What We Offer Founding team role with strong ownership and autonomy Opportunity to shape the future of AI-powered sales Flexible work environment Competitive salary Access to cutting-edge AI tools and training resources Post your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com. Include a short note on your most interesting AI project or voicebot/conversational AI experience.
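Because the knowledge-management responsibilities above hinge on how tenant documents are chunked and tagged before embedding, here is a small sketch of one common strategy: fixed-size chunks with overlap plus per-tenant metadata. Sizes, field names, and the ingest flow are illustrative assumptions, not this platform's actual pipeline:

```python
# Sketch: split a tenant document into overlapping chunks ready for embedding and upsert.
from dataclasses import dataclass

@dataclass
class Chunk:
    tenant_id: str   # carried as metadata so queries can be isolated per tenant
    source: str
    text: str

def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Fixed-size character chunks with overlap so sentences spanning a boundary survive."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def ingest(tenant_id: str, source: str, text: str) -> list[Chunk]:
    return [Chunk(tenant_id, source, piece) for piece in chunk_text(text)]

faq = "Q: How do I reschedule a demo? A: Reply to the confirmation email with a new slot. " * 20
chunks = ingest("tenant-acme", "faq.md", faq)
print(len(chunks), "chunks ready to embed and upsert into a vector DB with tenant_id metadata")
```

The tenant_id on every chunk is what makes the "strong data isolation" requirement enforceable at query time.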

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Senior Technical Architect – Machine Learning Solutions We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI), and scalable ML system design. This role will focus on leading the end-to-end architecture of advanced ML-driven platforms, delivering impactful, production-grade AI solutions across the enterprise. Key Responsibilities Lead the architecture and design of enterprise-grade ML platforms, including data pipelines, model training pipelines, model inference services, and monitoring frameworks. Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production. Design and implement retrieval-augmented generation (RAG) systems, vector databases, semantic search, and LLM orchestration frameworks (e.g., LangChain, Autogen). Define and enforce best practices in model development, versioning, CI/CD pipelines, model drift detection, retraining, and rollback mechanisms. Build robust pipelines for data ingestion, preprocessing, feature engineering, and model training at scale, using batch and real-time streaming architectures. Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured data use cases. Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services. Implement observability for ML models, including custom metrics, performance monitoring, and explainability (XAI) tooling. Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design. Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines. Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML), and scalable inference strategies (batch, real-time, streaming). Drive AI governance and responsible AI practices, including auditability, compliance, bias mitigation, and data protection. Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference. Ability to coach and lead technical teams, fostering growth, knowledge sharing, and technical excellence in AI/ML domains. Experience managing the technical roadmap and documentation for AI-powered products, ensuring timely delivery, performance optimization, and stakeholder alignment. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 8+ years of experience in software architecture, with 5+ years focused specifically on machine learning systems and 2+ years leading teams. Proven expertise in designing and deploying ML systems at scale, across cloud and hybrid environments. Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn). Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere). Demonstrated proficiency in MLOps tools and platforms: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Airflow, etc. In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP, including certification(s) in one or more platforms. Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment. 
Ability to design LLM-based systems, including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering. Solid understanding of security, compliance, and AI risk management in ML deployments. Preferred Skills Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking. Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF. Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption. Familiarity with zero-shot, few-shot learning, and retrieval-enhanced inference pipelines. Contributions to open-source ML tools or libraries. Experience deploying AI copilots, agents, or assistants using orchestration frameworks. What We Offer Joining QX Global Group means becoming part of a creative team where you can personally grow and contribute to our collective goals. We offer competitive salaries, comprehensive benefits, and a supportive environment that values work-life balance. Work Model Location: Ahmedabad Model: WFO Shift Timings: 12:30 PM–10:00 PM IST / 1:30 PM–11:00 PM IST
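Much of the MLOps surface this role owns (versioning, reproducibility, rollback) starts with disciplined experiment tracking. A minimal MLflow sketch, with an illustrative scikit-learn model and made-up experiment and metric names:

```python
# Minimal experiment-tracking sketch with MLflow: log params, metrics, and the trained model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("intent-classifier")              # illustrative experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)

    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")            # versioned artifact, enables rollback
```

The same pattern extends to LLM fine-tuning runs, where prompts, adapter configs, and evaluation scores are logged instead of scikit-learn parameters.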

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

🚀 Job Title: Senior AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Senior AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-4 years of experience in building and deploying AI/ML systems, with at least 1-2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Senior AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Lead AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

We are building an intelligent, emotionally aware AI advisory ecosystem with secure agent-to-agent communication. We are looking for experienced AI/ML Engineers who are passionate about conversational AI, LLM fine-tuning, and self-learning systems to join our core technical team. You will play a critical role in building, training, and refining our models for real-time advisory generation and agent collaboration. Key Responsibilities: ● Design, fine-tune, and evaluate LLM-based models (e.g., LLaMA, Mistral) for advisory use cases. ● Implement adaptive learning mechanisms for model self-improvement from user interaction data. ● Develop and optimize vector search using Pinecone or FAISS for memory/context retrieval. ● Work on sentiment detection and emotion-aware responses using NLP and affective computing. ● Collaborate with backend and product teams to integrate ML pipelines into production. ● Ensure models adhere to ethical AI practices, privacy laws (e.g., DPDP), and fairness standards. Required Skills & Experience: ● Proven experience with PyTorch, HuggingFace Transformers, or equivalent ML frameworks. ● Strong foundation in NLP, LLMs, and embedding-based search. ● Experience with model training, fine-tuning, and performance benchmarking. ● Familiarity with vector databases (e.g., Pinecone, FAISS) and retrieval-augmented generation. ● Understanding of security, privacy, and bias mitigation in ML. ● Experience with AWS/GCP/ML Ops is a plus. Nice to Have: ● Experience with agent-based architectures or multi-agent systems. ● Published research or open-source contributions in NLP or ML. To apply for this position, please create or update your profile at: 👉 https://candidate.hirenema.com Feel free to message me if you have any questions! Show more Show less
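The fine-tuning mentioned above is usually done with parameter-efficient adapters rather than full-weight training. A hedged sketch using Hugging Face transformers and peft (the checkpoint name, target modules, and hyperparameters are placeholders; real choices depend on the base model and GPU budget):

```python
# Sketch: attach a LoRA adapter to a causal LM for parameter-efficient fine-tuning.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.2"   # placeholder; any causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)   # used later to tokenize advisory dialogues
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                      # adapter rank: lower means fewer trainable parameters
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # attention projections; names are model-specific
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()            # typically well under 1% of the full model
# Training would then proceed with transformers.Trainer (or a custom loop) on the dialogue data.
```

Keeping only the adapter weights trainable also simplifies the self-improvement loop: new adapters can be trained on fresh interaction data and swapped in without redistributing the base model.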

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

Work From Office only - Jaipur, Rajasthan Must-have experience: 4+ years Should be strongly skilled in FastAPI, RAG, LLM, Generative AI About the Role: We are seeking a hands-on and experienced Data Scientist with deep expertise in Generative AI to join our AI/ML team. You will be instrumental in building and deploying machine learning solutions, especially GenAI-powered applications. Key Responsibilities: - Design, develop, and deploy scalable ML and GenAI solutions using LLMs, RAG pipelines, and advanced NLP techniques. - Implement GenAI use cases involving embeddings, summarization, semantic search, and prompt engineering. - Fine-tune and serve LLMs using frameworks like vLLM, LoRA, and QLoRA; deploy in cloud and on-premise environments. - Build inference APIs using FastAPI and orchestrate them into robust services. - Utilize tools and frameworks such as LangChain, LlamaIndex, ONNX, Hugging Face, and vector DBs (Qdrant, FAISS). - Collaborate closely with engineering and business teams to translate use cases into deployed solutions. - Guide junior team members, provide architectural insights, and ensure best practices in MLOps and the model lifecycle. - Stay updated on the latest research and developments in GenAI, LLMs, and NLP. Required Skills and Experience: - 4-8 years of hands-on experience in Data Science/Machine Learning, with a strong focus on NLP and Generative AI. - Proven experience with LLMs (LLaMA 1/2/3, Mistral, FLAN-T5) and concepts like RAG, fine-tuning, embeddings, chunking, reranking, and prompt optimization. - Experience with LLM APIs (OpenAI, Hugging Face) and open-source model deployment. - Proficiency in LangChain, LlamaIndex, and FastAPI. - Understanding of cloud platforms (AWS/GCP); certification in a cloud technology is preferred. - Familiarity with MLOps tools and practices for CI/CD, monitoring, and retraining of ML models. - Ability to read and interpret ML research papers and LLM architecture diagrams.
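Serving an open model with vLLM, which the posting above calls out, reduces in the simplest offline, batched case to a few lines. A hedged sketch (the checkpoint and sampling settings are illustrative; a real deployment would more likely run vLLM's OpenAI-compatible server behind a FastAPI gateway):

```python
# Sketch: batched offline generation with vLLM. Requires a GPU large enough for the model.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # placeholder checkpoint
sampling = SamplingParams(temperature=0.2, max_tokens=200)

prompts = [
    "Summarise in two sentences: the customer wants a refund for a delayed order.",
    "Extract the product names mentioned: 'I bought the Aura mattress and a Cloud pillow.'",
]

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text.strip())
```

Continuous batching is what makes vLLM attractive here: many concurrent requests share the GPU without hand-tuned batching code.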

Posted 3 weeks ago

Apply

Exploring Mistral Jobs in India

The Mistral job market in India is thriving, with growing demand for professionals skilled in this area. Mistral jobs are diverse, ranging from software development to data analysis to project management. Job seekers looking to explore opportunities in this field have a wide array of options to choose from.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their vibrant tech ecosystems and are home to many companies actively hiring for Mistral roles.

Average Salary Range

The salary range for Mistral professionals in India varies based on experience and skill level. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn INR 15-20 lakhs per annum or more.

Career Path

In the Mistral field, a typical career path may include roles such as Junior Developer, Senior Developer, and Technical Lead, eventually moving into management positions such as Project Manager or IT Director.

Related Skills

In addition to Mistral expertise, professionals in this field are often expected to have skills such as:

  • Programming languages (e.g., Python, Java)
  • Data analysis
  • Problem-solving
  • Project management

Interview Questions

  • What is Mistral and how is it used in the industry? (basic)
  • Can you explain the difference between Mistral and other similar tools? (medium)
  • How would you handle a complex data analysis project using Mistral? (advanced)
  • What are some common challenges faced when working with Mistral and how do you overcome them? (medium)
  • Describe a project where you successfully implemented Mistral to improve efficiency. (medium)
  • How do you stay updated with the latest trends and advancements in Mistral technology? (basic)
  • Can you walk us through your experience with Mistral workflow automation? (advanced)
  • Have you ever had to troubleshoot any issues with Mistral? How did you approach it? (medium)
  • What are some best practices you follow when working with Mistral workflows? (medium)
  • How do you prioritize tasks and deadlines when working on multiple Mistral projects simultaneously? (medium)
  • Explain a situation where you had to collaborate with a cross-functional team on a Mistral project. How did you ensure smooth communication and coordination? (medium)
  • What is your approach to testing Mistral workflows to ensure they meet quality standards? (medium)
  • Can you provide an example of a Mistral project where you had to make quick decisions under pressure? (advanced)
  • How do you handle feedback and incorporate it into your Mistral projects? (basic)
  • Describe a situation where you had to train a team member on Mistral. How did you approach it? (medium)
  • What are some key metrics you track to measure the success of a Mistral project? (basic)
  • Have you ever had to customize Mistral workflows to meet specific project requirements? If so, how did you approach it? (advanced)
  • How do you ensure data security and compliance when working with sensitive information in Mistral workflows? (medium)
  • Can you explain the role of Mistral in the overall workflow automation process? (basic)
  • What are some common pitfalls to avoid when working with Mistral? (medium)
  • Describe a project where you had to optimize Mistral workflows for better performance. (advanced)
  • How do you handle conflicts or disagreements with team members during Mistral projects? (medium)
  • Explain a challenging Mistral project you worked on and how you overcame obstacles to deliver results. (advanced)
  • What are your long-term career goals in the mistral field? (basic)

Closing Remark

As you prepare for Mistral job opportunities in India, remember to showcase your expertise, experience, and problem-solving skills during interviews. With the right preparation and confidence, you can land a rewarding career in this dynamic field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
