
265 Mistral Jobs

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 - 7.0 years

7 - 16 Lacs

Hyderābād

On-site

AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid–Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary:
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities:
- Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
- Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
- Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
- Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
- Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
- Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
- Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
- Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies:
- Languages & Frameworks: Python, PyTorch, TensorFlow, JAX, FastAPI, LangChain, LlamaIndex
- ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere, Hugging Face Hub & Transformers, Google Vertex AI, AWS SageMaker, Azure ML
- Data & Deployment: MLflow, DVC, Apache Airflow, Ray, Docker, Kubernetes, RESTful APIs, GraphQL, Snowflake, BigQuery, Delta Lake
- Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
- Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway, Whisper, CLIP, SAM (Segment Anything Model)

Qualifications:
- Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline
- 3–7 years of experience in machine learning or applied AI
- Hands-on experience deploying ML models to production environments
- Familiarity with LLM prompt engineering and fine-tuning
- Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications:
- Contributions to open-source AI projects or academic publications
- Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
- Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
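Editor's aside, not part of the posting: the RAG responsibility above boils down to embedding documents, retrieving the closest ones for a query, and passing them to an LLM as context. A minimal sketch, assuming the sentence-transformers and faiss packages; the model name, documents, and generate_answer() helper are illustrative placeholders, not tools required by the role.

```python
# Minimal RAG sketch: embed documents, retrieve nearest neighbors for a query,
# and hand them to an LLM as context. Everything here is a placeholder example.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available 24/7 via chat and email.",
    "Enterprise plans include a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product on normalized vectors = cosine similarity
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [docs[i] for i in ids[0]]

def generate_answer(query: str, context: list[str]) -> str:
    # Placeholder for a call to an LLM API (OpenAI, Claude, a local model, etc.)
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return prompt  # replace with an actual model call

print(generate_answer("How long do I have to return a product?", retrieve("refund window")))
```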

Posted 13 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, your work will involve cutting-edge technologies and will play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to explore more about you if you have:
- Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1 engineering institutes, with relevant work experience at a top technology company in computer science or mathematics-related fields and 3-5 years of experience in machine learning and NLP
- Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc.
- 3+ years of experience working with LLMs in large-scale environments
- Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs
- Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
- Deep familiarity with the internals of at least a few machine learning algorithms and concepts
- Experience with Deep Learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc.
- Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
- Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
- Knowledge of basic data structures and algorithms
- Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus

Your role at Level AI includes but is not limited to:
- Big picture: understand customers' needs, innovate, and use cutting-edge Deep Learning techniques to build data-driven solutions
- Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
- Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services
- Optimize existing deep learning models for performance, scalability, and efficiency
- Build, deploy, and own scalable production NLP pipelines
- Build post-deployment monitoring and continual learning capabilities
- Propose suitable evaluation metrics and establish benchmarks
- Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
- Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
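Editor's aside, not part of the posting: the text-classification work described above is often prototyped with the Hugging Face pipeline API. A minimal, hedged sketch, assuming the transformers package; the checkpoint is a commonly used public zero-shot model and the utterance and labels are made up.

```python
# Zero-shot intent classification for a contact-center utterance.
# The checkpoint and label set are illustrative placeholders.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "I was charged twice for my last bill and need a refund."
labels = ["billing issue", "technical support", "cancellation", "general inquiry"]

result = classifier(utterance, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top predicted intent and its score
```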

Posted 18 hours ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, your work will involve cutting-edge technologies and will play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to explore more about you if you have:
- Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1 engineering institutes, with relevant work experience at a top technology company in computer science or mathematics-related fields and 3-5 years of experience in machine learning and NLP
- Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc.
- 3+ years of experience working with LLMs in large-scale environments
- Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs
- Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
- Deep familiarity with the internals of at least a few machine learning algorithms and concepts
- Experience with Deep Learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc.
- Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
- Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
- Knowledge of basic data structures and algorithms
- Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus

Your role at Level AI includes but is not limited to:
- Big picture: understand customers' needs, innovate, and use cutting-edge Deep Learning techniques to build data-driven solutions
- Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
- Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services
- Optimize existing deep learning models for performance, scalability, and efficiency
- Build, deploy, and own scalable production NLP pipelines
- Build post-deployment monitoring and continual learning capabilities
- Propose suitable evaluation metrics and establish benchmarks
- Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
- Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/

Posted 18 hours ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Title: AI Engineer

Want to join a startup, but with the stability of a larger organization? Join our innovation team at HGS that's focused on building SaaS products. If you are a highly driven and passionate person who'd like to build highly scalable SaaS products in a startup-type environment, you're welcome to apply. The HGS Digital Innovation Team is designed to create products and solutions relevant for enterprises, discover innovations, and contextualize and experiment with them within a specific industry. This unit provides an environment for the exploration, development, and testing of cloud-based digital AI solutions. In addition, it also looks at rapid deployment at scale and sustainability of these solutions for target business impacts.

Job Overview:
We are seeking an agile AI Engineer with a strong focus on both AI engineering and SaaS product development in a 0-1 product environment. This role is perfect for a candidate skilled in building and iterating quickly, embracing a fail-fast approach to bring innovative AI solutions to market rapidly. You will be responsible for designing, developing, and deploying SaaS products using advanced Large Language Models (LLMs) such as Meta, Azure OpenAI, Claude, and Mistral, while ensuring secure, scalable, and high-performance architecture. Your ability to adapt, iterate, and deliver in fast-paced environments is critical.

Responsibilities:
- Lead the design, development, and deployment of SaaS products leveraging LLMs, including platforms like Meta, Azure OpenAI, Claude, and Mistral.
- Support the product lifecycle, from conceptualization to deployment, ensuring seamless integration of AI models with business requirements and user needs.
- Build secure, scalable, and efficient SaaS products that embody robust data management and comply with security and governance standards.
- Collaborate closely with product management and other stakeholders to align AI-driven SaaS solutions with business strategies and customer expectations.
- Fine-tune AI models using custom instructions to tailor them to specific use cases and optimize performance through techniques like quantization and model tuning.
- Architect AI deployment strategies using cloud-agnostic platforms (AWS, Azure, Google Cloud), ensuring cost optimization while maintaining performance and scalability.
- Apply retrieval-augmented generation (RAG) techniques to build AI models that provide contextually accurate and relevant outputs.
- Build the integration of APIs and third-party services into the SaaS ecosystem, ensuring robust and flexible product architecture.
- Monitor product performance post-launch, iterating and improving models and infrastructure to enhance user experience and scalability.
- Stay current with AI advancements, SaaS development trends, and cloud technology to apply innovative solutions in product development.

Qualifications:
- Bachelor's degree or equivalent in Information Systems, Computer Science, or related fields.
- 6+ years of experience in product development, with at least 2 years focused on AI-based SaaS products.
- Demonstrated experience leading the development of SaaS products, from ideation to deployment, with a focus on AI-driven features.
- Hands-on experience with LLMs (Meta, Azure OpenAI, Claude, Mistral) and SaaS platforms.
- Proven ability to build secure, scalable, and compliant SaaS solutions, integrating AI with cloud-based services (AWS, Azure, Google Cloud).
- Strong experience with RAG techniques and fine-tuning AI models for business-specific needs.
- Proficiency in AI engineering, including machine learning algorithms, deep learning architectures (e.g., CNNs, RNNs, Transformers), and integrating models into SaaS environments.
- Solid understanding of SaaS product lifecycle management, including customer-focused design, product-market fit, and post-launch optimization.
- Excellent communication and collaboration skills, with the ability to work cross-functionally and drive SaaS product success.
- Knowledge of cost-optimized AI deployment and cloud infrastructure, focusing on scalability and performance.
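Editor's aside, not part of the posting: the quantization technique mentioned above usually means loading an open model in 4-bit or 8-bit precision to cut GPU memory. A minimal sketch, assuming the transformers, accelerate, and bitsandbytes packages and a CUDA GPU; the model id is just an example open checkpoint.

```python
# Loading an open LLM with 4-bit quantization to reduce GPU memory use.
# The model id and prompt are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Summarize the benefits of SaaS billing automation:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```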

Posted 20 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Organization Snapshot:
Birdeye is the leading all-in-one Experience Marketing platform, trusted by over 100,000 businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on—it's at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time.

Job Overview:
Birdeye is seeking a Senior Data Scientist – NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you're passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems—spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and Autonomous Agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering.

Key Responsibilities:

LLM, GenAI & Agentic AI Systems
- Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models.
- Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance.
- Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks).
- Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization.

Conversational AI & NLP Engineering
- Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog, leveraging orchestration tools like LangChain, Rasa, and LLMFlow.
- Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction.
- Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and Hugging Face Transformers.
- Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems.

Scalable AI/ML Engineering
- Build and serve models using Python, FastAPI, gRPC, and REST APIs.
- Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows.
- Ensure production-grade reliability, latency optimization, observability, and failover mechanisms.

Cloud & MLOps Infrastructure
- Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services.
- Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX.

Cross-Functional Collaboration
- Partner with Product, Engineering, and Design teams to define AI-first experiences.
- Translate ambiguous business problems into structured ML/AI projects with measurable ROI.
- Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews.

Technical Skillset Required:
- Programming: Expert in Python, with strong OOP and data structure fundamentals.
- Frameworks: Proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, OpenAI/Anthropic APIs.
- NLP/LLM: Strong grasp of Transformer architecture, attention mechanisms, self-supervised learning, and LLM evaluation techniques.
- MLOps: Skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP.
- Databases: Hands-on with SQL/NoSQL databases, vector DBs, and retrieval systems.
- Tooling: Familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents.
- Applied Research: Experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems.

Bonus Points:
- Contributions to open-source NLP or LLM projects.
- Publications in AI/NLP/ML conferences or journals.
- Experience in Online Reputation Management (ORM), martech, or CX platforms.
- Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
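Editor's aside, not part of the posting: the LoRA/PEFT strategies mentioned above attach small trainable adapters to a frozen base model. A minimal sketch, assuming the transformers and peft packages; the base model and hyperparameters are illustrative placeholders, not Birdeye's actual setup.

```python
# Attaching a LoRA adapter to a causal LM for parameter-efficient fine-tuning.
# Model id, rank, and target modules are example values only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in for a production LLM

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer in GPT-2
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
# From here, training proceeds with a normal Trainer or training loop, updating only adapter weights.
```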

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

Remote


Job Title: Senior Software Engineer – AI (Prompt Engineering & Full Stack)
Location: Remote / United States / India
Employment Type: Full-Time
Product Stage: (Stealth Mode) AI Product Platform for Healthcare

About Us:
We're a growing healthtech startup on a mission to disrupt the $500B+ US Healthcare Operations & Management space using cutting-edge AI tools. Our goal is to build a modern, intelligent platform that eliminates administrative waste and improves healthcare operations. We're backed by the industry's best investors and SMEs, and we're building an elite team of engineers, designers, and domain experts. We are on a mission to help clinicians and patients by simplifying their administrative burden so that patients have the best outcomes.

The Role:
We're looking for an experienced Software Engineer – AI who thrives in ambiguity, moves fast, and is passionate about applying AI (& concepts...) to solve real-world problems. You'll play a foundational role in shaping both the product and the engineering culture. This role blends Prompt Engineering, LLM application design, and Full Stack Development to help build a next-generation healthcare intelligence engine that solves real-life problems for hospitals and health systems.

What You'll Do:
- Design and develop prompt architectures for LLM-based workflows
- Engineer and fine-tune prompt chains for optimal performance, accuracy, and reliability
- Collaborate with domain experts to understand the nuances of healthcare and translate them into intelligent AI interactions
- Develop and deploy secure, scalable full-stack features, from front-end UIs to back-end services and APIs
- Integrate AI capabilities into product workflows using tools like LangChain, OpenAI APIs, or open-source LLMs
- Work closely with the founding team to iterate quickly and bring the product to real-life operations

Skills & Experience:

Required:
- 5–7 years of experience in software engineering, ideally in AI or health-tech startups
- Strong grasp of prompt engineering for LLMs (e.g., GPT-4, Claude, Mistral, etc.)
- Experience with full-stack development using modern frameworks (e.g., React, Next.js, Node.js, Python, Flask, or FastAPI)
- Familiarity with AI tooling (LangChain, Pinecone, vector databases, etc.)
- Understanding of HIPAA and secure data handling practices
- Experience shipping production-grade code in a fast-paced, agile environment

Good to have (plus points):
- Knowledge of US healthcare operations (e.g., CPT, ICD-10 codes, SNOMED, EDI 837/835, payer and provider workflows)
- Experience with healthcare data formats (HL7, FHIR, EDI)
- Experience with DevOps and cloud deployment (AWS/GCP/Azure)

Why Join Us:
- Work on a mission-critical, AI-first product that impacts real healthcare outcomes
- Join at ground zero and shape the technical and product direction
- Competitive compensation, meaningful equity, and a flexible work environment
- Build with the latest in AI and solve problems the world hasn't cracked yet

If you're excited to use AI to fix the broken healthcare machinery, contact us ASAP!
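Editor's aside, not part of the posting: "prompt chains" in the role above usually means one model call feeding the next. A minimal sketch, assuming the openai Python package (v1 client) and an OPENAI_API_KEY environment variable; the model name, note text, and prompts are illustrative placeholders.

```python
# A two-step prompt chain: extract structured facts from a note, then
# draft a summary from the extraction. All inputs are made-up examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

note = "Pt c/o chest pain x2 days, worse on exertion. Hx of HTN. BP 150/95."

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

# Step 1: extraction prompt
facts = ask(f"List the symptoms, history, and vitals in this note as short bullet points:\n{note}")

# Step 2: the second prompt consumes the first prompt's output
summary = ask(f"Write a one-sentence handoff summary for a clinician based on:\n{facts}")
print(summary)
```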

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote


Job Title: Generative AI Intern – Workflow Automation (Remote)
Company: SEO Scientist AI
Industry: AI-Powered SEO & Marketing Automation
Location: Remote
Internship Duration: 3 to 6 Months
Stipend: Yes (Based on Skills)

About Us:
At SEO Scientist AI, we are building autonomous agents that automate SEO and marketing workflows using large language models and real-time tools. Our goal is to replace repetitive manual tasks in SEO with intelligent systems. We're competing with platforms like AirOps, AgenticFlow, Mazzal, and Relevance—and we're moving fast.

What You'll Work On:
- Assist in building and testing LLM-based agents for automating SEO and marketing tasks
- Support integration of tools like Notion, Google Sheets, and Webflow with AI workflows
- Use platforms like OpenAI, LangChain, and AWS Lambda to build internal prototypes
- Help write prompt chains, test workflows, and track output accuracy
- Work closely with product and engineering teams to improve automation pipelines

What We're Looking For:
- Hands-on experience or academic projects in Generative AI or LLMs (OpenAI, Claude, Mistral, etc.)
- Familiarity with Python and basic cloud concepts (AWS preferred)
- Interest or experience in automating tasks using tools like Zapier, Make.com, or custom scripts
- Bonus: exposure to LangChain, LlamaIndex, Pinecone, or similar agent frameworks
- A curious mind and eagerness to learn how AI is reshaping SEO and content marketing

Perks:
- Remote-first internship with flexible work hours
- Learn from engineers, marketers, and AI experts working on real-world agentic use cases
- Get hands-on experience with cutting-edge tools in the Generative AI space
- Letter of recommendation and priority for full-time roles

Posted 1 day ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

On-site


Job Description:
We are looking for a skilled LLM / GenAI Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with deep, hands-on experience in Large Language Models (LLMs), prompt engineering, and GenAI tools and frameworks.

Natural Abilities:
- Smart, self-motivated, responsible, out-of-the-box thinker.
- Detail-oriented with strong analytical skills.
- Great written communication skills.

Requirements:
- 5+ years of total software development experience with a strong foundation in Python.
- 2-3+ years of hands-on experience working with GenAI / LLMs, including real-world implementation and deployment.
- Deep familiarity with models like GPT-4, Claude, Mistral, LLaMA, etc.
- Strong understanding of prompt engineering, LLM fine-tuning, tokenization, and embedding-based search.
- Experience with Hugging Face, LangChain, OpenAI API, and vector databases (Pinecone, FAISS, Chroma).
- Exposure to agent frameworks and multi-agent orchestration.
- Excellent written and verbal communication skills.
- Proven ability to lead and mentor team members on technical and architectural decisions.

Responsibilities:
- Lead the design and development of GenAI/LLM-based products and solutions.
- Mentor and support junior engineers in understanding and implementing GenAI/LLM techniques.
- Work on fine-tuning, prompt engineering, RAG (Retrieval-Augmented Generation), and custom LLM workflows.
- Integrate LLMs into production systems using Python and frameworks like LangChain, LlamaIndex, Transformers, etc.
- Explore and apply advanced GenAI paradigms such as MCP, agent2agent collaboration, and autonomous agents.
- Research, prototype, and implement new ideas and stay current with state-of-the-art GenAI trends.
- Collaborate closely with product, design, and engineering teams to align LLM use cases with business goals.

(ref:hirist.tech)
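Editor's aside, not part of the posting: embedding-based search with a vector database, as listed in the requirements above, can be prototyped locally with Chroma's default embedding function. A minimal sketch, assuming the chromadb package; the collection name and documents are made-up examples.

```python
# Embedding-based semantic search with an in-memory Chroma collection.
# Documents, ids, and the query are illustrative placeholders.
import chromadb

client = chromadb.Client()  # in-memory instance; use PersistentClient for disk-backed storage
collection = client.create_collection(name="product_docs")

collection.add(
    documents=[
        "The API rate limit is 100 requests per minute.",
        "Webhooks retry failed deliveries up to five times.",
        "Single sign-on is available on the enterprise plan.",
    ],
    ids=["doc1", "doc2", "doc3"],
)

results = collection.query(query_texts=["How often are webhooks retried?"], n_results=1)
print(results["documents"][0][0])  # closest matching document
```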

Posted 2 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Assurance – Sr. Manager – Digital

Position Details:
As part of EY GDS Assurance Digital, you will be responsible for implementing innovative ideas through AI research to develop high-growth and impactful products. You will be helping EY's sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. You will work with multi-disciplinary teams across the entire region to support global clients. This is a combination of a technical role and a business development role in AI, responsible for creating innovative solutions by applying AI-based techniques to business problems. As our in-house senior AI engineer, your expertise and skills will be vital to our ability to steer our Innovation agenda.

Responsibilities:
- 10-12 years of experience in Data Science, with about 15 years of total experience
- Talk to businesses across Assurance teams, understand the problem, and develop the solution based on the problem, infrastructure, and cost limitations
- Respond to AI-related RFPs and business cases
- Ensure the team's utilization and billability, i.e., ensure that projects are lined up for developers one after the other
- Convert business problems into analytical problems and devise a solution approach
- Clean, aggregate, analyze, and interpret data to derive business insights from it
- Own the AI/ML implementation process: model design, feature planning, testing, production setup, monitoring, and release management
- Work closely with Solution Architects on deployment of AI POCs and scaling up to production-level applications
- Should have a solid background in Python and have deployed open-source models
- Work on data extraction techniques from complex PDFs/Word documents/forms: entity extraction, table extraction, information comparison

Key Requirements/Skills & Qualifications:
- Excellent academic background, including at a minimum a bachelor's or master's degree in Data Science, Business Analytics, Statistics, Engineering, Operational Research, or another related field with a strong focus on modern data architectures, processes, and environments.
- Solid background in Python with excellent coding skills.
- 6+ years of core data science experience in one or more of the areas below:
  - Machine Learning (Regression, Classification, Decision Trees, Random Forests, Time-series Forecasting and Clustering)
  - Understanding and usage of Large Language Models such as OpenAI models (ChatGPT, GPT-4), function calling, and frameworks like LangChain, LlamaIndex, agents, etc.; retrieval-augmented generation and prompt engineering
  - Good understanding of open-source LLM families like Mistral, Llama, etc. and fine-tuning on custom datasets
  - Deep Learning (DNN, RNN, LSTM, Encoder-Decoder Models)
  - Natural Language Processing: text summarization, aspect mining, question answering, text classification, NER, language translation, NLG, sentiment analysis
  - Computer Vision: image classification, object detection, tracking, etc.
  - SQL/NoSQL databases and their manipulation components
  - Working knowledge of API deployment (Flask/FastAPI/Azure Function Apps) and web app creation, Docker, Kubernetes

Additional skills requirements:
- Excellent written, oral, presentation, and facilitation skills
- Ability to coordinate multiple projects and initiatives simultaneously through effective prioritization, organization, flexibility, and self-discipline
- Must have demonstrated project management experience
- Knowledge of the firm's reporting tools and processes
- Proactive, organized, and self-sufficient, with the ability to prioritize and multitask
- Analyzes complex or unusual problems and can deliver insightful and pragmatic solutions
- Ability to quickly and easily create, gather, and analyze data from a variety of sources
- A robust and resilient disposition able to encourage discipline in team behaviors

What We Look For:
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What Working At EY Offers:
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
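Editor's aside, not part of the posting: the table-extraction work mentioned in the responsibilities above is often prototyped with a PDF parsing library before any LLM is involved. A minimal sketch, assuming the pdfplumber package; the file name and column layout are hypothetical, and real forms usually need per-layout tuning.

```python
# Pulling a table out of the first page of a PDF with pdfplumber.
# The file and its structure are illustrative placeholders.
import pdfplumber

with pdfplumber.open("invoice_sample.pdf") as pdf:
    page = pdf.pages[0]
    table = page.extract_table()  # returns rows as lists of cell strings, or None

if table:
    header, *rows = table
    for row in rows:
        record = dict(zip(header, row))  # map each cell to its column name
        print(record)
else:
    print("No table detected on the first page")
```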

Posted 2 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us:
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi:
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million.

In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

- Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
- Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income.
- Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization.
- Spocto - Debt recovery & risk mitigation platform.
- Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals and predictions for lenders, investors and business enterprises.

So far, we have on-boarded over 17,000+ enterprises, 6,200+ investors & lenders and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1,000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Job Summary:
We are looking for a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.

Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions.
- Build and optimize conversational AI solutions, drawing on at least 1 year of experience in chatbot development.
- Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph.
- Design and develop ML/DL-based models to enhance natural language understanding capabilities.
- Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications.
- Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc. for domain-specific tasks.
- Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance.
- Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications.
- Develop APIs and pipelines to integrate LLMs into enterprise applications.
- Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Requirements:
- 3-5 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development.
- Hands-on experience in developing and deploying conversational AI/chatbots is a plus.
- Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, LangGraph.
- Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models.
- Understanding of prompt engineering and fine-tuning LLMs.
- Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale.
- Experience working with APIs, Docker, and FastAPI for model deployment.
- Strong analytical and problem-solving skills.
- Ability to work independently and collaboratively in a fast-paced environment.

Good to Have:
- Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.).
- Knowledge of knowledge graphs and symbolic AI.
- Understanding of MLOps and LLMOps for deploying scalable AI solutions.
- Experience in automated evaluation of LLMs and bias mitigation techniques.
- Research experience or published work in LLMs, NLP, or Generative AI is a plus.

Why Join Us?
- Opportunity to work on cutting-edge LLM and Generative AI projects.
- Collaborative and innovative work environment.
- Competitive salary and benefits.
- Career growth opportunities in AI and ML research and development.
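Editor's aside, not part of the posting: "APIs and pipelines to integrate LLMs into enterprise applications", as listed above, most often takes the shape of a thin web service in front of a model call. A minimal sketch, assuming the fastapi and pydantic packages; run_llm() is a stub standing in for whichever hosted or local model the team actually uses.

```python
# Wrapping a model call behind a FastAPI endpoint.
# run_llm() is a placeholder for an actual LLM client or in-process model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

class ChatResponse(BaseModel):
    reply: str

def run_llm(prompt: str) -> str:
    # Stub: replace with a real model call.
    return f"(model reply to: {prompt})"

@app.post("/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    return ChatResponse(reply=run_llm(req.message))

# Run locally with:  uvicorn app:app --reload   (assuming this file is saved as app.py)
```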

Posted 2 days ago

Apply

0 years

0 Lacs

Delhi, India

Remote


About Us:
Astra is a cybersecurity SaaS company that makes otherwise chaotic pentests a breeze with its one-of-a-kind AI-led offensive Pentest Platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications for 13,000+ security tests. CTOs and CISOs love Astra because it helps them achieve continuous security at scale, fix vulnerabilities in record time, and seamlessly transition from DevOps to DevSecOps with Astra's powerful CI/CD integrations. Astra is loved by 800+ companies across 70+ countries. In 2024, Astra uncovered 2.5 million+ vulnerabilities for its customers, saving them $110M+ in potential losses due to security vulnerabilities. We've been awarded by the President of France Mr. François Hollande at the La French Tech program and the Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, Dream 11, OLX Autos, etc. are a few of Astra's customers.

Job Description:
This is a remote position.

Role Overview:
As Astra Security's first AI Engineer, you will play a pivotal role in introducing and embedding AI into our security products. You will be responsible for designing, developing, and deploying AI applications leveraging both open-source models (Llama, Mistral, DeepSeek, etc.) and proprietary services (OpenAI, Anthropic). Your work will directly impact how AI is used to enhance threat detection, automate security processes, and improve intelligence gathering. This is an opportunity to not only build future AI models but also define Astra Security's AI strategy, laying the foundation for future AI-driven security solutions.

Key Responsibilities:
- Lead the AI integration efforts within Astra Security, shaping the company's AI roadmap.
- Develop and optimize Retrieval-Augmented Generation (RAG) pipelines with multi-tenant capabilities.
- Build and enhance RAG applications using LangChain, LangGraph, and vector databases (e.g., Milvus, Pinecone, pgvector).
- Implement efficient document chunking, retrieval, and ranking strategies.
- Optimize LLM interactions using embeddings, prompt engineering, and memory mechanisms.
- Work with graph databases (Neo4j or similar) for structuring and querying knowledge bases.
- Design multi-agent workflows using orchestration platforms like LangGraph or other emerging agent frameworks for AI-driven decision-making and reasoning.
- Integrate vector search, APIs, and external knowledge sources into agent workflows.
- Work across the end-to-end AI ecosystem, such as Hugging Face, to accelerate AI development (while initial work won't involve extensive model training, the candidate should be ready for fine-tuning, domain adaptation, and LLM deployment when needed).
- Design and develop AI applications using LLMs (Llama, Mistral, OpenAI, Anthropic, etc.).
- Build APIs and microservices to integrate AI models into backend architectures.
- Collaborate with the product and engineering teams to integrate AI into Astra Security's core offerings.
- Stay up to date with the latest advancements in AI and security, ensuring Astra remains at the cutting edge.

What We Are Looking For:
- Exceptional Python skills for AI/ML development.
- Hands-on experience with LLMs and AI frameworks (LangChain, Transformers, RAG-based applications).
- Strong understanding of retrieval-augmented generation (RAG) and knowledge graphs.
- Experience with AI orchestration tools (LangChain, LangGraph).
- Familiarity with graph databases (Neo4j or similar).
- Experience with Ollama for efficient AI model deployment for production workloads is a plus.
- Experience deploying AI models using Docker.
- Hands-on experience with Ollama setup and loading DeepSeek/Llama models.
- Strong problem-solving skills and a self-starter mindset—you will be building AI at Astra from the ground up.
- Software engineering mindset: this role requires building AI solutions from 0 to 1 and scaling them based on business needs. The candidate should be comfortable designing, developing, testing, and deploying production-ready AI systems while ensuring maintainability, performance, and scalability.

Nice To Have:
- Experience with AI deployment frameworks (e.g., BentoML, FastAPI, Flask, AWS).
- Background in cybersecurity or security-focused AI applications.

Why Join Astra Security?
- Own and drive the AI strategy at Astra Security from day one.
- Fully remote, agile working environment.
- Good engineering culture with full ownership of the design, development, and release lifecycle.
- A wholesome opportunity where you get to build things from scratch and improve and ship code to production in hours, not weeks.
- Holistic understanding of the SaaS and enterprise security business.
- Annual trips to beaches or mountains (the last one was at Wayanad).
- Open and supportive culture.
- Health insurance & other benefits for you and your spouse. Maternity benefits included.
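Editor's aside, not part of the posting: the document-chunking step named in the responsibilities above can be as simple as a fixed-size splitter with overlap. A minimal sketch in plain Python; the chunk size and overlap are arbitrary example values, and production pipelines often split on sentences or tokens instead.

```python
# A fixed-size text chunker with overlap, the simplest version of the
# document-chunking step used in RAG pipelines.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some shared context
    return chunks

report = "Pentest finding: exposed admin panel on staging. " * 200  # stand-in for a long document
pieces = chunk_text(report)
print(len(pieces), "chunks; first chunk length:", len(pieces[0]))
```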

Posted 3 days ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site


Job Title: GenAI / ML Engineer
Function: Research & Development
Location: Delhi/Bangalore (3 days in office)

About the Company:
Elucidata is a TechBio company headquartered in San Francisco. Our mission is to make life sciences data AI-ready. Elucidata's LLM-powered platform, Polly, helps research teams wrangle, store, manage and analyze large volumes of biomedical data. We are at the forefront of driving GenAI in life sciences R&D across leading BioPharma companies like Pfizer, Janssen, NextGen Jane and many more. We were recognised as the 'Most Innovative Biotech Company, 2024' by Fast Company. We are a 120+ multi-disciplinary team of experts based across the US and India. In September 2022, we raised $16 million in our Series A round led by Eight Roads, F-Prime, and our existing investors Hyperplane and IvyCap.

About the Role:
We are looking for a GenAI / ML Engineer to join our R&D team and work on cutting-edge applications of LLMs in biomedical data processing. In this role, you'll help build and scale intelligent systems that can extract, summarize, and reason over biomedical knowledge from large bodies of unstructured text, including scientific publications, EHR/EMR reports, and more. You'll work closely with data scientists, biomedical domain experts, and product managers to design and implement reliable GenAI-powered workflows — from rapid prototypes to production-ready solutions. This is a highly strategic role as we continue to invest in agentic AI systems and LLM-native infrastructure to power the next generation of biomedical applications.

Key Responsibilities:
- Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI.
- Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for biomedical applications.
- Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback.
- Collaborate cross-functionally with data scientists, bioinformaticians, product teams, and curators to build impactful AI solutions.
- Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods.

Qualifications:
- 2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction.
- Strong Python programming skills and experience building production-ready codebases.
- Hands-on experience with LLM frameworks and tooling (e.g., LangChain, Hugging Face, OpenAI APIs, Transformers).
- Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices.
- Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow.
- Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment.

Good to Have (Preferred but Not Mandatory):
- Experience working with biomedical or clinical text (e.g., PubMed, EHRs, trial data).
- Exposure to building autonomous agents using CrewAI or LangGraph.
- Understanding of knowledge graph construction and integration with LLMs.
- Experience with evaluation challenges unique to GenAI workflows (e.g., hallucination detection, grounding, traceability).
- Experience with fine-tuning, LoRA, PEFT, or using embeddings and vector stores for retrieval.
- Working knowledge of cloud platforms (AWS/GCP) and MLOps tools (MLflow, Airflow, etc.).
- Contributions to open-source LLM or NLP tooling.

We are proud to be an equal-opportunity workplace and are an affirmative action employer. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
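Editor's aside, not part of the posting: entity extraction from unstructured text, as described in the responsibilities above, is often prototyped with a token-classification pipeline before any LLM-based approach. A minimal sketch, assuming the transformers package; the checkpoint shown is a general-purpose public NER model used as a placeholder, and biomedical work would substitute a domain-specific model.

```python
# Named-entity extraction with the Hugging Face token-classification pipeline.
# Model checkpoint and input sentence are illustrative placeholders.
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Metformin was administered to patients at Boston General in the 2021 trial."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```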

Posted 3 days ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Our Company:
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We create intelligent, user-first digital products that redefine industries through the power of AI and engineering excellence.

Role Overview:
We are looking for a Senior Software Engineer - AI with 4-6 years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment—fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies.

What we are looking for from an ideal candidate:
- Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs).
- Design and implement Agentic AI systems—autonomous agents capable of planning and executing multi-step tasks.
- Collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products.
- Write clean, scalable code and build robust APIs and services to support AI model deployment.
- Own feature delivery end-to-end—from research and experimentation to deployment and monitoring.
- Stay current with emerging AI frameworks, tools, and best practices and apply them in product development.
- Contribute to a high-performing team culture and mentor junior team members as needed.

Preferred Skills (what skills do you need?):
- 3–6 years of overall software development experience, with 3+ years specifically in AI/ML engineering.
- Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face).
- Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio).
- Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel).
- Experience building and deploying ML models to production environments.
- Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts.
- Comfortable working in a startup-like environment—self-motivated, adaptable, and willing to take ownership.
- Solid understanding of API development, version control, and modern DevOps/MLOps practices.
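Editor's aside, not part of the posting: the "autonomous agents capable of planning and executing multi-step tasks" described above reduce, at their simplest, to a loop in which a model picks a tool, the tool runs, and the result is fed back. A minimal toy sketch in plain Python; llm() and the tool registry are stubs, not any real framework's API.

```python
# A toy plan-act-observe loop showing the shape of an agentic system.
# llm() and the tools are stubs; a real agent would call an actual model and real tools.
def llm(prompt: str) -> str:
    # Stub planner: always proposes the calculator for this demo.
    return "use_tool: calculator | input: 18 * 24"

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only; never eval untrusted input
}

def run_agent(goal: str, max_steps: int = 3) -> str:
    observation = ""
    for _ in range(max_steps):
        decision = llm(f"Goal: {goal}\nLast observation: {observation}\nPick a tool.")
        tool_name, tool_input = [part.split(":", 1)[1].strip() for part in decision.split("|")]
        observation = TOOLS[tool_name](tool_input)
        if observation:  # in a real agent, the model decides when the goal is met
            return observation
    return observation

print(run_agent("How many hours are in 18 days?"))  # -> 432
```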

Posted 3 days ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


We are looking for a skilled LLM / GenAI Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with deep, hands-on experience in Large Language Models (LLMs), prompt engineering, and GenAI tools and frameworks. This role requires not only technical proficiency but also excellent communication skills and the ability to guide and upskill junior team members. Exposure to cutting-edge concepts like multi-agent collaboration, Memory-Context-Planning (MCP), and agent-to-agent workflows is highly desirable.

Job Responsibilities:
- Lead the design and development of GenAI/LLM-based products and solutions.
- Mentor and support junior engineers in understanding and implementing GenAI/LLM techniques.
- Work on fine-tuning, prompt engineering, RAG (Retrieval-Augmented Generation), and custom LLM workflows.
- Integrate LLMs into production systems using Python and frameworks like LangChain, LlamaIndex, Transformers, etc.
- Explore and apply advanced GenAI paradigms such as MCP, agent2agent collaboration, and autonomous agents.
- Research, prototype, and implement new ideas and stay current with state-of-the-art GenAI trends.
- Collaborate closely with product, design, and engineering teams to align LLM use cases with business goals.

Requirements:
- 4+ years of total software development experience with a strong foundation in Python.
- 2–3+ years of hands-on experience working with GenAI / LLMs, including real-world implementation and deployment.
- Deep familiarity with models like GPT-4, Claude, Mistral, LLaMA, etc.
- Strong understanding of prompt engineering, LLM fine-tuning, tokenization, and embedding-based search.
- Experience with Hugging Face, LangChain, OpenAI API, and vector databases (Pinecone, FAISS, Chroma).
- Exposure to agent frameworks and multi-agent orchestration.
- Excellent written and verbal communication skills.
- Proven ability to lead and mentor team members on technical and architectural decisions.
- Experience with cloud platforms (AWS/GCP/Azure) for deploying LLMs at scale.
- Knowledge of ethical AI practices, bias mitigation, and model interpretability.
- Background in machine learning, natural language processing (NLP), or AI research.
- Publications or contributions to open-source LLM projects are a plus.

Benefits:
- Competitive compensation and benefits
- Half-yearly appraisals
- Friendly environment
- Work-life balance
- 5-day work week
- Flexible office timings
- Employee-friendly leave policies
- Work from home (with prior approvals)

Posted 3 days ago

Apply

3.0 - 4.0 years

0 Lacs

New Delhi, Delhi, India

Remote


Job Title: AI Research Engineer – Private LLM & Cognitive Systems
Location: Delhi
Type: Full-Time
Experience Level: Senior / Expert
Start Date: Immediate

About Brainwave Science:
Brainwave Science is a leader in cognitive technologies, specializing in solutions for the security and intelligence sectors. Our flagship product, iCognative™, leverages real-time cognitive response analysis using Artificial Intelligence and Machine Learning techniques to redefine the future of investigations, defense, counterterrorism, and counterintelligence operations. Beyond security, Brainwave Science is at the forefront of healthcare innovation, applying our cutting-edge technology to identify diseases, neurological conditions, and mental health challenges in advance, detect stress and anxiety in real time, and provide non-medical, science-backed interventions. Together, we are shaping a future where advanced technology strengthens security, promotes wellness, and creates a healthier, safer world for individuals and communities worldwide.

About the Role:
We are seeking an experienced and forward-thinking AI/ML Engineer – LLM & Deep Learning Expert to design, develop, and deploy Large Language Models (LLMs) and intelligent AI systems. You will work on cutting-edge projects at the intersection of natural language processing, edge AI, and biosignal intelligence, helping drive innovation across defense, security, and healthcare use cases. This role is ideal for someone who thrives in experimental environments, understands private and local LLM deployments, and is passionate about solving real-world challenges using advanced AI.

Responsibilities:
- Design, train, fine-tune, and deploy Large Language Models using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers
- Integrate LLMs for local/edge deployment using tools like Ollama, LangChain, LM Studio, or llama.cpp
- Build NLP applications for intelligent automation, investigative analytics, and biometric interpretation
- Optimize models for low latency, token efficiency, and on-device performance
- Work on prompt engineering, embedding tuning, and vector search integration (FAISS, Qdrant, Weaviate)
- Collaborate with technical and research teams to deliver scalable AI-driven features
- Stay current with developments in open-source and closed-source LLM ecosystems (e.g., Meta, OpenAI, Mistral)

Must-Have Requirements:
- B.Tech/M.Tech in Computer Science (CSE), Electronics & Communication (ECE), or Electrical & Electronics (EEE) from IIT, NIT, or BITS
- Minimum 3-4 years of hands-on experience in AI/ML, deep learning, and LLM development
- Deep experience with Transformer models (e.g., GPT, LLaMA, Mistral, Falcon, Claude)
- Hands-on with tools like LangChain, Hugging Face, Ollama, Docker, or Kubernetes
- Proficiency in Python and strong knowledge of Linux environments
- Strong understanding of NLP, attention mechanisms, and model fine-tuning

Preferred Qualifications:
- Experience with biosignals, especially EEG or time-series data
- Experience deploying custom-trained LLMs on proprietary datasets
- Familiarity with RAG pipelines and multi-modal models (e.g., CLIP, LLaVA)
- Knowledge of cloud platforms (AWS, GCP, Azure) for scalable model training and serving
- Published research, patents, or open-source contributions in AI/ML communities
- Excellent communication, analytical, and problem-solving skills

What We Offer:
- Competitive compensation based on experience
- Flexible working hours and a remote-friendly culture
- Access to high-performance compute infrastructure
- Opportunities to work on groundbreaking AI projects in healthcare, security, and defense
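Editor's aside, not part of the posting: the local/edge deployment with Ollama mentioned above typically means talking to a model served on the same machine over HTTP. A minimal sketch, assuming the requests package, an Ollama server already running on localhost, and that the named model has been pulled; the model name and prompt are examples.

```python
# Calling a locally served model through Ollama's HTTP API.
# Assumes `ollama serve` is running and the model has been pulled beforehand.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "In one sentence, what is edge AI?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```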

Posted 3 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance. As a vital team member, your work will be cutting-edge technologies and will play a high-impact role in shaping the future of AI-driven enterprise applications. You will directly work with people who've worked at Amazon, Facebook, Google, and other technology companies in the world. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us! We'll love to explore more about you if you have Qualification: B.E/ B.Tech/M.E/M.Tech/PhD from tier 1 engineering institutes with relevant work experience with a top technology company in computer science or mathematics-related fields with 3-5 years of experience in machine learning and NLP Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc 3+ years of experience working with LLMs in large-scale environments. Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc Deep familiarity with internals of at least a few Machine Learning algorithms and concepts Experience with Deep Learning frameworks like Pytorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc Experience with ML model deployments using REST API, Docker, Kubernetes, etc Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable Knowledge of basic data structures and algorithms Knowledge of real-time streaming tools/architectures like Kafka, Pub/Sub is a plus Your role at Level AI includes but is not limited to Big picture: Understand customers’ needs, innovate and use cutting edge Deep Learning techniques to build data-driven solutions Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company’s products and services Optimize existing deep learning models for performance, scalability, and efficiency Build, deploy, and own scalable production NLP pipelines Build post-deployment monitoring and continual learning capabilities. Propose suitable evaluation metrics and establish benchmarks Keep abreast with SOTA techniques in your area and exchange knowledge with colleagues Desire to learn, implement and work with latest emerging model architectures, training and inference techniques, data curation pipelines, etc. 
To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
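As one concrete illustration of the "ML model deployments using REST API, Docker, Kubernetes" requirement above, here is a minimal, hedged sketch of serving a Transformer classifier behind FastAPI; the checkpoint and route names are placeholder assumptions rather than Level AI's actual stack.

```python
# Minimal sketch: serving a sentiment classifier as a REST endpoint with FastAPI.
# Assumes fastapi, uvicorn, and transformers are installed; the model is a placeholder.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

class Utterance(BaseModel):
    text: str

@app.post("/classify")
def classify(utterance: Utterance):
    result = classifier(utterance.text)[0]        # e.g. {"label": "POSITIVE", "score": 0.99}
    return {"label": result["label"], "score": round(result["score"], 4)}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```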

Posted 3 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Summary Position Summary Job title: AI & Generative AI - Senior Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage ,implement & support risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. The Team Internal Audit As part of Digital Internal Audit team, you will be part of our USI Internal Audit practice and will be responsible for scaling digital capabilities for our IA clients. Responsibilities will include helping our clients adopt digital through various stages of their Internal Audit lifecycle, from planning till reporting. We help organizations enhance their digital footprint in the Internal Audit space by adopting a digital approach. Through digital, Internal Audit groups can not only perform its traditional duties better, but can broaden its mandate and sphere of influence to provide actionable insights to management through the anticipation of both risks and opportunities. Work you’ll do The key job responsibilities will be to: Analyze client requirements, perform gap analysis, design and develop digital solutions using data analytics tools and technologies to solve client needs Design, build, and implement custom AI and Generative AI solutions tailored to meet specific business needs Utilize advanced machine learning techniques to develop/train AI models and fine-tune Gen-AI models. Continuously monitor and improve the performance of AI models to ensure they meet the desired performance metrics Deploy solution across various cloud and on-prem computing platforms Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Conduct data analysis, preprocessing, and modeling to extract valuable insights and drive data-driven decision-making. Provide technical expertise and thought leadership in the field of AI/Generative AI to guide clients in adopting AI-driven solutions. Stay up to date with the latest advancements in AI and machine learning technologies to incorporate cutting-edge solutions into projects. Required Skills Proficiency in designing and developing generative AI models and AI algorithms utilizing state-of-the-art techniques. Experience in developing and maintaining data pipelines and Retrieval-Augmented Generation (RAG) solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning Experience with cloud services (AWS, Azure, GCP) for deploying AI models and applications. Familiarity with Gen AI application design and architecture Practical experience of evaluating Large-Language Models (LLM) performance (using common metrics) In-depth knowledge of Natural Language Processing (NLP) and its applications in AI solutions. 
Expertise in handling large multi-modal datasets and applying data preprocessing techniques to ensure data quality and relevance. Ability to analyze and interpret complex data to derive actionable insights and drive decision-making processes Strong knowledge of programming languages like Python and SQL Proficiency in building agent frameworks, RAG pipelines, experience in LLM Frameworks like LangChain or LlamaIndex; implementation exposure to vector databases like Pinecone, Chroma or FAISS Strong understanding of Gen AI techniques and LLMs such as GPT, Gemini, Claude, etc. and Open Source LLMs (Llama, Mistral, etc.) Expertise in Natural Language Processing (NLP), Deep Learning, LLMs & Generative AI (Gen AI) Experience with frameworks such as TensorFlow, PyTorch and Keras Familiar with AI/Gen AI Ethics & Governance frameworks, applications and archetypes Preferred Skills Certification in Cloud (AWS/ Azure / GCP) Certification in AI/ML or Data Analytics Qualification B.Tech/B.E. and/or MBA Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304565 Show more Show less
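For the RAG skills listed above, a minimal sketch of the core retrieve-then-generate loop, written directly against the OpenAI SDK rather than LangChain or LlamaIndex so the moving parts stay visible. The passages, question, and model IDs are placeholder assumptions, and a production pipeline would replace the numpy step with a vector database such as Pinecone, Chroma, or FAISS.

```python
# Minimal sketch of a retrieve-then-generate (RAG) step using the OpenAI SDK.
# Assumes OPENAI_API_KEY is set; passages and model names are illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()

passages = [
    "SOX testing for Q3 covered 42 key controls across revenue and procurement.",
    "The travel and expense policy caps per-diem claims at 75 USD.",
]
question = "How many key controls were tested in Q3?"

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs, q_vec = embed(passages), embed([question])[0]
best = passages[int(np.argmax(doc_vecs @ q_vec))]   # embeddings are unit-norm, so dot = cosine

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{best}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```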

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Summary Position Summary Job title: AI & Generative AI - Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage ,implement & support risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and Robotic Process Automation (RPA) to solve Deloitte’s clients‘ most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions. The Team Internal Audit As part of Digital Internal Audit team, you will be part of our USI Internal Audit practice and will be responsible for scaling digital capabilities for our IA clients. Responsibilities will include helping our clients adopt digital through various stages of their Internal Audit lifecycle, from planning till reporting. We help organizations enhance their digital footprint in the Internal Audit space by adopting a digital approach. Through digital, Internal Audit groups can not only perform its traditional duties better, but can broaden its mandate and sphere of influence to provide actionable insights to management through the anticipation of both risks and opportunities. Work you’ll do The key job responsibilities will be to: Analyze client requirements, perform gap analysis, design and develop digital solutions using data analytics tools and technologies to solve client needs Design, build, and implement custom AI and Generative AI solutions tailored to meet specific business needs Utilize advanced machine learning techniques to develop/train AI models and fine-tune Gen-AI models. Continuously monitor and improve the performance of AI models to ensure they meet the desired performance metrics Deploy solution across various cloud and on-prem computing platforms Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Conduct data analysis, preprocessing, and modeling to extract valuable insights and drive data-driven decision-making. Provide technical expertise and thought leadership in the field of AI/Generative AI to guide clients in adopting AI-driven solutions. 
Stay up to date with the latest advancements in AI and machine learning technologies to incorporate cutting-edge solutions into projects. Required Skills Proficiency in designing and developing generative AI models and AI algorithms utilizing state-of-the-art techniques. Experience in developing and maintaining data pipelines and Retrieval-Augmented Generation (RAG) solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning Experience with cloud services (AWS, Azure, GCP) for deploying AI models and applications. Familiarity with Gen AI application design and architecture Practical experience of evaluating Large-Language Models (LLM) performance (using common metrics) In-depth knowledge of Natural Language Processing (NLP) and its applications in AI solutions. Expertise in handling large multi-modal datasets and applying data preprocessing techniques to ensure data quality and relevance. Ability to analyze and interpret complex data to derive actionable insights and drive decision-making processes Strong knowledge of programming languages like Python and SQL Proficiency in building agent frameworks, RAG pipelines, experience in LLM Frameworks like LangChain or LlamaIndex; implementation exposure to vector databases like Pinecone, Chroma or FAISS Strong understanding of Gen AI techniques and LLMs such as GPT, Gemini, Claude, etc. and Open Source LLMs (Llama, Mistral, etc.) Expertise in Natural Language Processing (NLP), Deep Learning, LLMs & Generative AI (Gen AI) Experience with frameworks such as TensorFlow, PyTorch and Keras Familiar with AI/Gen AI Ethics & Governance frameworks, applications and archetypes Preferred Skills Certification in Cloud (AWS/ Azure / GCP) Certification in AI/ML or Data Analytics Qualification B.Tech/B.E. and/or MBA Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304561 Show more Show less
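The skills above call for evaluating LLM performance with common metrics; here is a minimal, hedged sketch using ROUGE from the Hugging Face evaluate library. The prediction/reference pair is an illustrative placeholder, and real benchmarking would aggregate over a held-out evaluation set.

```python
# Minimal sketch: scoring an LLM-generated summary against a reference with ROUGE.
# Assumes the `evaluate` and `rouge_score` packages are installed.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["The audit found three control gaps in vendor onboarding."]
references = ["Internal audit identified three control gaps in the vendor onboarding process."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)   # rouge1 / rouge2 / rougeL / rougeLsum scores
```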

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


***Immediate requirement***
Job Title: AI Engineer with LLM
Salary Range: 13-15 LPA
No. of years of experience: 5+ years
Job Type: Contract
Contract Duration: 12 months (potential to extend)
Location: Gurugram
Work Type: Hybrid
Start Date: Immediate (notice period/joining within 1-2 weeks)

We are looking for an experienced AI Engineer with a strong background in designing, fine-tuning, and deploying Large Language Models (LLMs). The ideal candidate will have hands-on expertise in NLP, deep learning frameworks (such as PyTorch or TensorFlow), prompt engineering, and working with APIs or open-source models such as OpenAI, LLaMA, or Mistral. Experience in deploying models to production environments and optimizing them for performance is key.

Key Skills: LLMs, NLP, Python, Transformers, Prompt Engineering, OpenAI/GPT, Model Fine-Tuning, ML Pipelines, API Integration, MLOps.

**Apply only if you can join within 1-2 weeks**
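As a small illustration of the prompt-engineering and API-integration skills listed, a hedged sketch of a structured prompt (system instruction plus few-shot examples) sent through the OpenAI Python SDK; the task, labels, and model name are assumptions made for illustration only.

```python
# Minimal sketch: a structured prompt (system role + few-shot examples) sent through
# the OpenAI Python SDK. The model name and examples are illustrative placeholders.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "You classify customer emails as BILLING, TECHNICAL or OTHER. Reply with the label only."},
    {"role": "user", "content": "My invoice for May was charged twice."},
    {"role": "assistant", "content": "BILLING"},
    {"role": "user", "content": "The app crashes whenever I upload a PDF."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    temperature=0,           # deterministic output suits classification-style prompts
)
print(response.choices[0].message.content)   # expected: "TECHNICAL"
```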

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission. Key Responsibilities: Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (like sports imagery and video) Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including those not seen during training LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections) System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. 
Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure) Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization Required Qualifications: Bachelors of Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field 5+ years (for Principal / MTS-4) / 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on Computer Vision Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets Experience in finetuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras Demonstrable experience with Multi Modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques Experience with developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django) Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow) Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices Excellent problem-solving skills and the ability to work with complex, large-scale datasets Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences Full Stack Development experience in any one stack Preferred Qualifications / Bonus Skills: Experience with Generative AI vision models for tasks like image analysis, description, or validation Track record of publications in top-tier AI/ML/CV conferences or journals Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics) Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services Experience with video processing and analysis techniques Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka) Demonstrated ability to lead technical projects and mentor team members Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @ nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
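For the LLM fine-tuning stack the qualifications mention (Hugging Face Transformers with PEFT/LoRA), a minimal, hedged sketch of attaching a LoRA adapter to an open checkpoint before training; the base model and hyperparameters are placeholder assumptions, and a full run would add a tokenized dataset and a Trainer or accelerate loop around it.

```python
# Minimal sketch: wrapping a causal LM with a LoRA adapter via Hugging Face PEFT.
# The checkpoint and hyperparameters are placeholders; only the adapter weights train.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"            # placeholder open checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typically targeted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the small adapter matrices are trainable
```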

Posted 4 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission. Key Responsibilities: Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (like sports imagery and video) Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including those not seen during training LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections) System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. 
Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure) Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization Required Qualifications: Bachelors of Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field 5+ years (for Principal / MTS-4) / 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on Computer Vision Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets Experience in finetuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras Demonstrable experience with Multi Modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques Experience with developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django) Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow) Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices Excellent problem-solving skills and the ability to work with complex, large-scale datasets Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences Full Stack Development experience in any one stack Preferred Qualifications / Bonus Skills: Experience with Generative AI vision models for tasks like image analysis, description, or validation Track record of publications in top-tier AI/ML/CV conferences or journals Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics) Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services Experience with video processing and analysis techniques Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka) Demonstrated ability to lead technical projects and mentor team members Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @ nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
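To make the object-detection requirement concrete, here is a minimal, hedged sketch of inference with a pretrained DETR-family detector from Hugging Face Transformers; the checkpoint, image path, and confidence threshold are illustrative assumptions rather than the team's actual pipeline.

```python
# Minimal sketch: inference with a pretrained DETR detector from Hugging Face Transformers.
# The checkpoint, image path, and threshold are placeholders.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("broadcast_frame.jpg")            # e.g. a frame sampled from a match feed
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (label, score, box) triples above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])      # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.7
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```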

Posted 4 days ago

Apply

7.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Description We are seeking a skilled and experienced Platform Engineer/Architect to lead the setup, advancement and maintenance of a robust on-premise environment for hosting open-source large language models. This role involves designing and implementing scalable, secure, and efficient infrastructure solutions that cater to the specific needs of large-scale AI models. How You Will Contribute And What You Will Learn Design and architect a scalable and secure on-premise hosting environment for large language models. Develop and implement infrastructure automation tools for efficient management and deployment. Ensure high availability and disaster recovery capabilities. Optimize the hosting environment for maximum performance and efficiency. Implement monitoring tools to track system performance and resource utilization. Regularly update the infrastructure to incorporate the latest technological advancements. Establish robust security protocols to protect sensitive data and model integrity. Ensure compliance with data protection regulations and industry standards. Conduct regular security audits and vulnerability assessments. Work closely with AI/ML teams to understand their requirements and provide suitable infrastructure solutions. Provide technical guidance and support to internal teams and stakeholders. Stay abreast of emerging trends in AI infrastructure and large language model hosting. Manage physical and virtual resources to ensure optimal allocation and utilization. Forecast resource needs and plan for future expansion and upgrades Key Skills And Experience Bachelor's or Master's degree in Computer Science, Information Technology, or a related field with 7-12 years of experience. Proven experience in infrastructure architecture, with exposure to AI/ML environments. Experience with inferencing frameworks like TGI, TEI, Lorax, S-Lora etc. Experience with training frameworks like PyTorch, TensorFlow etc. Proven experience with On-premises OSS models – Llama3, Mistral etc. Strong knowledge of networking, storage, and computing technologies. Experience of working with container orchestration tools (e.g., Kubernetes - Redhat OS). Proficient programming skills in Python Familiarity with open-source large language models and their hosting requirements. Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere Gender-Equality Index by Bloomberg Workplace Pride Global Benchmark At Nokia, we act inclusively and respect the uniqueness of people. 
Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.

About The Team
Strategy and Technology lays the path for Nokia’s future technology innovation and identifies the most promising areas for Nokia to create new value. We set the company’s strategy and technology vision, offer an unparalleled research foundation for innovation, and provide critical support infrastructure for Nokia.
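For the inferencing frameworks the posting names (e.g., TGI), a minimal sketch of a Python client calling an on-premise Text Generation Inference server's /generate endpoint; the URL, prompt, and generation parameters are assumptions made for illustration, not a prescribed setup.

```python
# Minimal sketch: querying an on-prem Text Generation Inference (TGI) server.
# Assumes a TGI container is already serving an open model (e.g. a Llama 3 variant)
# on localhost:8080; the URL, prompt, and parameters are placeholders.
import requests

payload = {
    "inputs": "Summarise the maintenance window announcement in one sentence:\n...",
    "parameters": {
        "max_new_tokens": 128,
        "temperature": 0.2,
    },
}

resp = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["generated_text"])
```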

Posted 4 days ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

On-site


Job Description
We are looking for a skilled LLM / GenAI Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with deep, hands-on experience in Large Language Models (LLMs), prompt engineering, and GenAI tools and frameworks.

Natural Abilities
• Smart, self-motivated, responsible, out-of-the-box thinker.
• Detail-oriented with strong analytical skills.
• Strong written communication skills.

Requirements
• 5+ years of total software development experience with a strong foundation in Python.
• 2–3+ years of hands-on experience working with GenAI / LLMs, including real-world implementation and deployment.
• Deep familiarity with models like GPT-4, Claude, Mistral, LLaMA, etc.
• Strong understanding of prompt engineering, LLM fine-tuning, tokenization, and embedding-based search.
• Experience with Hugging Face, LangChain, OpenAI API, and vector databases (Pinecone, FAISS, Chroma).
• Exposure to agent frameworks and multi-agent orchestration.
• Excellent written and verbal communication skills.
• Proven ability to lead and mentor team members on technical and architectural decisions.

Responsibilities
• Lead the design and development of GenAI/LLM-based products and solutions.
• Mentor and support junior engineers in understanding and implementing GenAI/LLM techniques.
• Work on fine-tuning, prompt engineering, RAG (Retrieval-Augmented Generation), and custom LLM workflows.
• Integrate LLMs into production systems using Python and frameworks like LangChain, LlamaIndex, Transformers, etc.
• Explore and apply advanced GenAI paradigms such as MCP, agent2agent collaboration, and autonomous agents.
• Research, prototype, and implement new ideas and stay current with state-of-the-art GenAI trends.
• Collaborate closely with product, design, and engineering teams to align LLM use cases with business goals.
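As a small illustration of the embedding-based search and vector-database experience the requirements list (Pinecone, FAISS, Chroma), a hedged sketch using ChromaDB's in-memory client; the collection name, documents, and query are placeholders, and Chroma's default embedding model is used for simplicity.

```python
# Minimal sketch: embedding-based search over a small document set with ChromaDB.
# Assumes the chromadb package is installed; documents and IDs are placeholders.
import chromadb

client = chromadb.Client()                        # in-memory instance for illustration
collection = client.create_collection("kb_demo")

collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "Refunds are processed within 5 business days.",
        "Orders above 999 INR ship free of charge.",
        "Passwords can be reset from the account settings page.",
    ],
)

results = collection.query(query_texts=["how long do refunds take"], n_results=1)
print(results["documents"][0][0])                 # expected: the refunds document
```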

Posted 4 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform, from India, for the world.
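For context on the speech-to-text side of the stack described above (Whisper and related models), a minimal, hedged sketch of offline transcription with openai-whisper; the audio file and model size are placeholder assumptions, and the real-time pipeline this role describes would add streaming, chunking, and speaker diarization on top.

```python
# Minimal sketch: offline speech-to-text with openai-whisper.
# Assumes the openai-whisper package and ffmpeg are installed; paths are placeholders.
import whisper

model = whisper.load_model("base")                          # small multilingual checkpoint
result = model.transcribe("sales_call.wav", language=None)  # None lets Whisper auto-detect

print(result["language"])      # detected language code, e.g. "hi" or "en"
print(result["text"])          # full transcript
for segment in result["segments"]:
    print(f'{segment["start"]:7.2f}s  {segment["text"].strip()}')
```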

Posted 4 days ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

Remote


About Praella: We are a proud Great Place to Work certified organization. We strive for excellence, and we chase perfection for our merchants and team. We build relationships with our merchants that are not reflective of a vendor-like or even a partner-like relationship. We strive to become an extension of who our merchants are. And we strive to become a reflection of our team as an organization. We are also a Webby-winning agency. We are a Shopify Plus partner. We are grateful to be an extension of some of the best e-commerce brands. We are a merchant-first, results-driven team. We have the nothing is impossible mentality. We work together and support each other and our clients. Collaboration and camaraderie are everything. We are data-driven, ambitious, and creative - we work hard, and we work smart. - Our founders started one of the first Shopify Plus agencies, which was eventually sold. - We are Shopify Plus Partners and partner with other e-commerce leaders like ReCharge, Klaviyo, Omnisend, Yotpo, Smile, etc. - We have a remote team, but our headquarters is in Chicago. We have a small team in Chicago. Outside of Chicago, we have teams located in Atlanta, Los Angeles, Phoenix, New York, Toronto, Athens (Greece), Sarajevo (Bosnia), and Surat (India). - Do you want to work from Europe or India for a month and travel to nearby destinations on long weekends? Why not? - Majority of our clients are e-commerce-based merchants with annual revenue between $2M-$350MM. We are ambitious. And, we want you to be too. We need people that want to be pushed and who want to be challenged. We want people who will push us and who will challenge us. Is that you? Our Website : http://praella.com/ Job Description of AI Engineer Praella is looking for an experienced AI Engineer for which the required details are mentioned below. Objectives of this Role: As an AI Engineer (Model Development & Deployment), you will be the core driver in transforming product and business concepts into impactful, data-driven AI applications. Your role will bridge the gap between innovative ideas and tangible AI solutions, encompassing the entire lifecycle from model development to real-world deployment. You will be instrumental in ensuring that AI initiatives deliver significant value and are seamlessly integrated into our products. About the Role: Lead AI Development: Translate product and business ideas into well-defined AI/ML problem statements and robust model architectures. Innovate and Build: Develop custom AI/ML models from scratch and fine-tune existing architectures (LLMs, transformers, CNNs, etc.) to address specific challenges. Data Mastery: Collect, clean, analyze, and structure diverse data sets for effective model training and evaluation. Deploy and Optimize: Build scalable AI services/APIs and deploy optimized models into production environments, ensuring performance and reliability. Communicate and Educate: Collaborate with stakeholders and product teams to explain AI concepts, potential, and integration strategies. Drive AI Innovation: Stay at the forefront of AI advancements, contributing to brainstorming sessions and fostering an AI-first approach in product development. What you can bring to the table: You are passionate about AI and eager to apply your expertise to create real-world impact. While comprehensive knowledge is valuable, a strong learning aptitude will ensure rapid growth within our dynamic environment. 
Skills:
Strong programming skills in Python with proficiency in TensorFlow, PyTorch, or similar frameworks.
Solid understanding of various AI/ML model architectures (LLMs, NLP, CV, classification/regression).
Experience with end-to-end AI system development, from data ingestion to model serving.
Familiarity with tools like Docker, FastAPI, Hugging Face, LangChain, or related ecosystems.
Strong data analysis and problem-solving abilities.
Excellent communication skills, capable of explaining complex technical concepts to non-technical audiences.
Ability to work effectively in a collaborative, fast-paced environment.
Strong organizational skills and attention to detail.
Strong analytics skills with a focus on data mining and dashboard outlining.
Ability to manage multiple tasks and meet deadlines.
Proficient in project coordination and reporting.
Excellent written and verbal communication skills in English.
Strong time management and multitasking skills.

Work Experience: 3–5 years of hands-on experience in AI/ML model development, training, and deployment.
Qualification: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
Location: Surat, Gujarat

Nice to Have:
Experience with open-source LLMs like Mistral, LLaMA, Falcon, and fine-tuning techniques (LoRA/QLoRA).
Prior experience with Chatbot applications, Recommender Systems, or Predictive Analytics.
Familiarity with cloud platforms (AWS/GCP/Azure) for AI model deployment.

Life At Praella Private Limited
Benefits and Perks:
5-day work week
Fully paid basic life insurance
Competitive salary
Vibrant workplace
PTO/Paid Offs/Annual Paid Leaves/Paternal Leaves
Fully paid health insurance
Quarterly incentives
Rewards & recognitions
Team outings

Our Cultural Attributes:
Growth mindset
People come first
Customer obsessed
Diverse & inclusive
Exceptional quality
Push the envelope
Learn and grow
Equal opportunity to grow
Ownership
Transparency
Team work

Together, we can…!!!!
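As one concrete example of the "develop custom AI/ML models from scratch" part of the role, a minimal, hedged PyTorch training loop for a tiny classifier on synthetic data; the architecture and data are placeholders meant only to show the shape of the workflow, not a recommended model.

```python
# Minimal sketch: a custom classifier trained from scratch in PyTorch on synthetic data.
# A real project would swap in its own dataset, batching, and validation split.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(512, 20)                       # 512 synthetic samples, 20 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()       # simple separable rule as the label

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"final training loss {loss.item():.3f}, accuracy {accuracy:.2%}")
```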

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies