
658 Mistral Jobs - Page 17

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Information: Date Opened 06/16/2025. Industry: IT Services. Job Type: Full time. City: Pune City. State/Province: Maharashtra. Country: India. Zip/Postal Code: 411001.

About Us: CCTech's mission is to transform human life by the democratization of technology. We are a well-established digital transformation company building applications in the areas of CAD, CFD, Artificial Intelligence, Machine Learning, 3D web apps, Augmented Reality, Digital Twin, and other enterprise applications. We have two business divisions: product and consulting. simulationHub is our flagship product and the manifestation of our vision; currently, thousands of users use our CFD app in their upfront design process. Our consulting division, with partners such as Autodesk Forge, AWS, and Azure, is helping the world's leading engineering organizations, many of them Fortune 500 companies, achieve digital supremacy.

Job Description: We are seeking a passionate and skilled AI Engineer with over 2 years of hands-on experience to join our growing team. The ideal candidate will have an engineering background and a strong grasp of modern AI technologies, especially prompt engineering, agentic AI models, and production-grade AI workflows. You'll play a key role in building intelligent systems that augment and automate real-world business processes.

Responsibilities: Design, develop, and deploy AI-powered solutions using LLMs and agentic frameworks. Build and optimize prompt engineering strategies to ensure high-performance language model behavior. Create and maintain autonomous AI agents capable of executing complex multi-step tasks. Develop, test, and iterate on real-world AI workflows integrated into broader applications. Collaborate with product managers, designers, and engineers to translate business problems into scalable AI solutions. Monitor and fine-tune AI models in production for accuracy, performance, and cost-effectiveness. Stay current with emerging trends in generative AI, LLMs, agent-based architectures, and MLOps.

Requirements: 2+ years of hands-on experience in AI/ML engineering or applied NLP. Proven experience with prompt engineering and customizing large language model behavior. Experience developing or integrating agentic AI frameworks (e.g., LangChain, AutoGPT, CrewAI). Strong understanding of LLMs (e.g., GPT-4, Claude, Mistral, Gemini) and how to apply them in workflow automation. Demonstrated ability to deploy working AI solutions and pipelines in production environments. Proficiency in Python and relevant AI libraries (Transformers, OpenAI SDK, LangChain, etc.). Familiarity with RESTful APIs, cloud platforms (e.g., Azure, AWS, GCP), and version control tools (e.g., Git).

Benefits: Opportunity to work with a dynamic and fast-paced IT organization. Make a real impact on the company's success by shaping a positive and engaging work culture. Work with a talented and collaborative team. Be part of a company that is passionate about making a difference through technology.
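For illustration, a minimal sketch of the kind of prompt-driven LLM call this role describes, assuming the OpenAI Python SDK; the model name, system prompt, and example input are placeholders, not part of the posting's stack:

```python
# Minimal prompt-engineering sketch (illustrative; model and prompts are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an assistant that extracts action items from meeting notes. "
    "Return one action item per line, each starting with a verb."
)

def extract_action_items(notes: str) -> str:
    """Send a structured prompt to the model and return its text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # any chat-completion model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": notes},
        ],
        temperature=0.2,              # low temperature for consistent formatting
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(extract_action_items("We agreed Priya will draft the CFD report by Friday."))
```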

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Bison Global Search is seeking a Principal AI Engineer for a leading product company in Chennai. They work on cutting-edge technologies in the BIOS industry. Please find details about the role below.

Location: Chennai (please do not apply if you are not willing to relocate to Chennai). Company: Product company (leader in BIOS products). Designation: Principal AI Engineer. Skills required: Python + RAG (Retrieval-Augmented Generation) + Agentic AI. Experience: 8+ years of experience as an AI engineer, plus guiding and mentoring a team of 3-5 AI engineers. The complete JD is below; if this interests you, please apply.

We are looking for a highly skilled Principal AI Engineer with deep expertise in Retrieval-Augmented Generation (RAG) and Agentic AI to lead our AI initiatives and drive innovation in FW and Data Center AI solutions. This role requires a strategic thinker who can design and deploy scalable AI architectures, integrate LLMs with retrieval-based techniques, and develop intelligent agentic systems that autonomously interact with data, APIs, and workflows. The role will lead the design and deployment of cutting-edge AI-driven solutions, focusing on LLMs for code synthesis, automated testing, and intelligent autonomous agents that enhance software development workflows, bringing strong technical expertise, strategic vision, and leadership to build and deploy AI-driven products aligned with business goals.

Key Responsibilities:

AI Strategy and Leadership: Define and execute AI strategies focused on RAG-based retrieval, code generation, and AI-assisted software engineering. Work with stakeholders to align AI capabilities with business objectives and software development needs. Research and integrate cutting-edge LLMs and autonomous AI agent architectures into development processes.

RAG & Agentic AI Development: Develop RAG pipelines that enhance the AI's ability to retrieve relevant knowledge and generate context-aware responses. Build and optimize agentic AI systems that can interact with APIs, databases, and development environments (using LangChain, OpenAI APIs, etc.). Implement AI-powered search, chatbots, and decision-support tools for software engineers. Fine-tune LLMs (GPT, Llama, Mistral, Claude, Gemini, etc.) for domain-specific applications. Optimize retrieval mechanisms to enhance response accuracy, grounding AI outputs in real-world data.

Code Generation & Test Case Automation: Leverage LLMs to generate high-quality, production-ready code. Develop AI-driven test case generation tools that automatically create and validate unit tests, integration tests, and regression tests. Integrate AI-driven code assistants and programming agents into IDE and CI/CD workflows. Optimize prompt engineering and fine-tuning strategies for LLMs to improve code quality and efficiency.

MLOps & Scalable AI Systems: Architect and deploy scalable AI models and retrieval pipelines using cloud-based MLOps pipelines (AWS/GCP/Azure, Docker, Kubernetes). Optimize LLMs for real-time AI inference, ensuring low-latency, high-performance AI solutions.

Collaboration: Work cross-functionally with product teams, software engineers, and business stakeholders to integrate AI solutions into products.

Mentorship: Guide and mentor a team of 3-5 AI engineers in LLM fine-tuning, retrieval augmentation, and autonomous AI agents. Establish best practices for AI-assisted software development, secure AI integration, and bias mitigation.
Model Development: Ability to design, train, and evaluate various AI models, including LLMs and standalone models; familiarity with model training tools and frameworks like Hugging Face Trainer, Fairseq, etc.

Required Qualifications:

Education: Master's or Ph.D. in Computer Science, AI, Machine Learning, or a related field.

Experience: 8+ years of experience in AI and machine learning, with at least 2 years of experience working on LLMs, code generation, RAG, or AI-powered automation.

Technical skills: Proficiency in Python, TensorFlow, PyTorch, and LangChain. Experience with LLM fine-tuning for code generation. Strong expertise in vector databases (FAISS, Weaviate, Chroma, Pinecone, Milvus) and retrieval models. Hands-on experience with AI-powered code assistants (Copilot, Code Llama, Codex, GPT-4). Knowledge of automated software testing, AI-driven test case generation, and AI-assisted debugging. Experience with multi-agent AI systems (LangGraph, CrewAI, AutoGen, OpenAI Assistants API) for autonomous coding tasks. Knowledge of GoLang for building high-performance, scalable components and unit test case generation using CMocka is a plus. Hands-on model development, working with business stakeholders to define KPIs and develop and deliver multi-modal (text and image) and ensemble models. Develop novel approaches to solve firmware lifecycle management code generation and customer support issues. Implement advanced natural language processing and computer vision models to extract insights from diverse data sources, user-generated data, and images. Automate model lifecycle management. Stay updated with AI and machine learning technology advancements to drive Firmware Lifecycle Management.

Analytical & Problem-Solving: Analytical Thinking: strong analytical skills to interpret complex data and derive actionable insights. Problem-Solving: ability to troubleshoot and resolve technical issues related to AI models and systems.

Research & Innovation: Continuous Learning: commitment to staying updated with the latest research and advancements in AI and machine learning. Innovation: ability to think creatively and propose innovative solutions to complex problems.

Soft Skills: Communication: excellent verbal and written communication skills. Adaptability: ability to adapt to changing technologies and project requirements. Team Player: strong interpersonal skills and the ability to work well in a team environment.

Preferred Qualifications: Experience with deploying and maintaining AI models in production environments. Familiarity with RAG-specific techniques like knowledge distillation or multi-hop retrieval. Understanding of reinforcement learning and active learning techniques for model improvement. Previous experience with large-scale NLP systems and AI-powered search engines. Contribution to AI research, patents, or open-source development.
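As an illustration of the RAG retrieval step this role centers on, a minimal sketch assuming FAISS and sentence-transformers; the corpus, model choice, and query are made-up placeholders:

```python
# Minimal RAG retrieval sketch (illustrative): embed documents, index them with
# FAISS, and pull the top matches to ground an LLM prompt.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "BIOS update failures are usually caused by interrupted power during flashing.",
    "Firmware lifecycle management covers build, signing, distribution, and rollback.",
    "CMocka is a unit-testing framework for C projects.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(query_vec, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

context = "\n".join(retrieve("Why did my BIOS flash fail?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: Why did my BIOS flash fail?"
print(prompt)  # this grounded prompt would then be sent to an LLM
```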

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 16 Lacs

Hyderabad

On-site

AI Specialist / Machine Learning Engineer. Location: On-site (Hyderabad). Department: Data Science & AI Innovation. Experience Level: Mid-Senior. Reports To: Director of AI / CTO. Employment Type: Full-time.

Job Summary: We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities: Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI. Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude. Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant). Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines. Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation. Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products. Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift. Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies: Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex. ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML. Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake. Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus. Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model).

Qualifications: Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline. 3-7 years of experience in machine learning or applied AI. Hands-on experience deploying ML models to production environments. Familiarity with LLM prompt engineering and fine-tuning. Strong analytical thinking, problem-solving ability, and communication skills.

Preferred Qualifications: Contributions to open-source AI projects or academic publications. Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin). Knowledge of synthetic data generation and augmentation techniques.

Job Type: Permanent. Pay: ₹734,802.74 - ₹1,663,085.14 per year. Benefits: Health insurance, Provident Fund. Schedule: Day shift. Work Location: In person.
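For the MLflow-style experiment tracking mentioned in the listing above, a minimal sketch assuming scikit-learn and MLflow; the experiment name, dataset, and model are placeholders:

```python
# Minimal experiment-tracking sketch with MLflow (illustrative): train a small
# model and log its parameters and metrics so runs can be compared later.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # experiment name is a placeholder

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", n_estimators)  # hyperparameters
    mlflow.log_metric("accuracy", accuracy)         # evaluation metrics
    print(f"logged run with accuracy={accuracy:.3f}")
```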

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to learn more about you if you have:

Qualification: B.E/B.Tech/M.E/M.Tech/PhD from a tier 1 engineering institute, with relevant work experience at a top technology company in computer science or mathematics-related fields and 3-5 years of experience in machine learning and NLP. Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc. 3+ years of experience working with LLMs in large-scale environments. Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs. Knowledge and hands-on experience with Transformer-based language models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc. Deep familiarity with the internals of at least a few machine learning algorithms and concepts. Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc. Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc. Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable. Knowledge of basic data structures and algorithms. Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus.

Your role at Level AI includes, but is not limited to: Big picture: understand customers' needs, innovate, and use cutting-edge deep learning techniques to build data-driven solutions. Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others. Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services. Optimize existing deep learning models for performance, scalability, and efficiency. Build, deploy, and own scalable production NLP pipelines. Build post-deployment monitoring and continual learning capabilities.
Propose suitable evaluation metrics and establish benchmarks. Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues. Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/ Funding: https://www.crunchbase.com/organization/level-ai LinkedIn: https://www.linkedin.com/company/level-ai/
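For the Transformer-based text classification work this listing emphasizes, a minimal sketch assuming Hugging Face Transformers; the pretrained model and example utterances are placeholders:

```python
# Minimal text-classification sketch with Hugging Face Transformers
# (illustrative): run a pretrained sentiment model over contact-center style
# utterances.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

utterances = [
    "I've been waiting 40 minutes and nobody has helped me.",
    "Thanks, the agent resolved my billing issue right away.",
]

for text, result in zip(utterances, classifier(utterances)):
    # each result is a dict like {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```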

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to learn more about you if you have:

Qualification: B.E/B.Tech/M.E/M.Tech/PhD from a tier 1 engineering institute, with relevant work experience at a top technology company in computer science or mathematics-related fields and 3-5 years of experience in machine learning and NLP. Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc. 3+ years of experience working with LLMs in large-scale environments. Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs. Knowledge and hands-on experience with Transformer-based language models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc. Deep familiarity with the internals of at least a few machine learning algorithms and concepts. Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc. Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc. Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable. Knowledge of basic data structures and algorithms. Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus.

Your role at Level AI includes, but is not limited to: Big picture: understand customers' needs, innovate, and use cutting-edge deep learning techniques to build data-driven solutions. Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others. Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services. Optimize existing deep learning models for performance, scalability, and efficiency. Build, deploy, and own scalable production NLP pipelines. Build post-deployment monitoring and continual learning capabilities.
Propose suitable evaluation metrics and establish benchmarks. Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues. Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/ Funding: https://www.crunchbase.com/organization/level-ai LinkedIn: https://www.linkedin.com/company/level-ai/

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: AI Engineer

Want to join a startup, but with the stability of a larger organization? Join our innovation team at HGS that's focused on building SaaS products. If you are a highly driven and passionate person who'd like to build highly scalable SaaS products in a startup-type environment, you're welcome to apply. The HGS Digital Innovation Team is designed to create products and solutions relevant for enterprises, discover innovations, and contextualize and experiment with them within a specific industry. This unit provides an environment for the exploration, development, and testing of cloud-based digital AI solutions. In addition, it also looks at rapid deployment at scale and the sustainability of these solutions for target business impacts.

Job Overview: We are seeking an agile AI Engineer with a strong focus on both AI engineering and SaaS product development in a 0-1 product environment. This role is perfect for a candidate skilled in building and iterating quickly, embracing a fail-fast approach to bring innovative AI solutions to market rapidly. You will be responsible for designing, developing, and deploying SaaS products using advanced Large Language Models (LLMs) such as Meta, Azure OpenAI, Claude, and Mistral, while ensuring secure, scalable, and high-performance architecture. Your ability to adapt, iterate, and deliver in fast-paced environments is critical.

Responsibilities: Lead the design, development, and deployment of SaaS products leveraging LLMs, including platforms like Meta, Azure OpenAI, Claude, and Mistral. Support the product lifecycle, from conceptualization to deployment, ensuring seamless integration of AI models with business requirements and user needs. Build secure, scalable, and efficient SaaS products that embody robust data management and comply with security and governance standards. Collaborate closely with product management and other stakeholders to align AI-driven SaaS solutions with business strategies and customer expectations. Fine-tune AI models using custom instructions to tailor them to specific use cases and optimize performance through techniques like quantization and model tuning. Architect AI deployment strategies using cloud-agnostic platforms (AWS, Azure, Google Cloud), ensuring cost optimization while maintaining performance and scalability. Apply retrieval-augmented generation (RAG) techniques to build AI models that provide contextually accurate and relevant outputs. Build the integration of APIs and third-party services into the SaaS ecosystem, ensuring robust and flexible product architecture. Monitor product performance post-launch, iterating and improving models and infrastructure to enhance user experience and scalability. Stay current with AI advancements, SaaS development trends, and cloud technology to apply innovative solutions in product development.

Qualifications: Bachelor's degree or equivalent in Information Systems, Computer Science, or related fields. 6+ years of experience in product development, with at least 2 years focused on AI-based SaaS products. Demonstrated experience in leading the development of SaaS products, from ideation to deployment, with a focus on AI-driven features. Hands-on experience with LLMs (Meta, Azure OpenAI, Claude, Mistral) and SaaS platforms. Proven ability to build secure, scalable, and compliant SaaS solutions, integrating AI with cloud-based services (AWS, Azure, Google Cloud). Strong experience with RAG model techniques and fine-tuning AI models for business-specific needs.
Proficiency in AI engineering, including machine learning algorithms, deep learning architectures (e.g., CNNs, RNNs, Transformers), and integrating models into SaaS environments. Solid understanding of SaaS product lifecycle management, including customer-focused design, product-market fit, and post-launch optimization. Excellent communication and collaboration skills, with the ability to work cross-functionally and drive SaaS product success. Knowledge of cost-optimized AI deployment and cloud infrastructure, focusing on scalability and performance.
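For the quantization technique mentioned in the listing above, a minimal sketch assuming the Transformers and bitsandbytes libraries on a GPU machine; the model id is a placeholder for whatever open-weight model is actually used:

```python
# Minimal 4-bit quantized loading sketch (illustrative): shrink an open-weight
# LLM's memory footprint with bitsandbytes so it fits on smaller GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder open-weight model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                 # place layers on available devices
)

inputs = tokenizer("Summarize RAG in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```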

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Information: Date Opened 06/16/2025. Industry: IT Services. Job Type: Full time. City: Pune City. State/Province: Maharashtra. Country: India. Zip/Postal Code: 411001.

About Us: CCTech's mission is to transform human life by the democratization of technology. We are a well-established digital transformation company building applications in the areas of CAD, CFD, Artificial Intelligence, Machine Learning, 3D web apps, Augmented Reality, Digital Twin, and other enterprise applications. We have two business divisions: product and consulting. simulationHub is our flagship product and the manifestation of our vision; currently, thousands of users use our CFD app in their upfront design process. Our consulting division, with partners such as Autodesk Forge, AWS, and Azure, is helping the world's leading engineering organizations, many of them Fortune 500 companies, achieve digital supremacy.

Job Description: We are seeking a passionate and skilled AI Engineer with over 2 years of hands-on experience to join our growing team. The ideal candidate will have an engineering background and a strong grasp of modern AI technologies, especially prompt engineering, agentic AI models, and production-grade AI workflows. You'll play a key role in building intelligent systems that augment and automate real-world business processes.

Responsibilities: Design, develop, and deploy AI-powered solutions using LLMs and agentic frameworks. Build and optimize prompt engineering strategies to ensure high-performance language model behavior. Create and maintain autonomous AI agents capable of executing complex multi-step tasks. Develop, test, and iterate on real-world AI workflows integrated into broader applications. Collaborate with product managers, designers, and engineers to translate business problems into scalable AI solutions. Monitor and fine-tune AI models in production for accuracy, performance, and cost-effectiveness. Stay current with emerging trends in generative AI, LLMs, agent-based architectures, and MLOps.

Requirements: 2+ years of hands-on experience in AI/ML engineering or applied NLP. Proven experience with prompt engineering and customizing large language model behavior. Experience developing or integrating agentic AI frameworks (e.g., LangChain, AutoGPT, CrewAI). Strong understanding of LLMs (e.g., GPT-4, Claude, Mistral, Gemini) and how to apply them in workflow automation. Demonstrated ability to deploy working AI solutions and pipelines in production environments. Proficiency in Python and relevant AI libraries (Transformers, OpenAI SDK, LangChain, etc.). Familiarity with RESTful APIs, cloud platforms (e.g., Azure, AWS, GCP), and version control tools (e.g., Git).

Benefits: Opportunity to work with a dynamic and fast-paced IT organization. Make a real impact on the company's success by shaping a positive and engaging work culture. Work with a talented and collaborative team. Be part of a company that is passionate about making a difference through technology.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Organization Snapshot: Birdeye is the leading all-in-one Experience Marketing platform, trusted by 100,000+ businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on: it's at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time.

Job Overview: Birdeye is seeking a Senior Data Scientist - NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you're passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and autonomous agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering.

Key Responsibilities:

LLM, GenAI & Agentic AI Systems: Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models. Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance. Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks). Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization.

Conversational AI & NLP Engineering: Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog, leveraging orchestration tools like LangChain, Rasa, and LLMFlow. Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction. Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and HuggingFace Transformers. Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems.

Scalable AI/ML Engineering: Build and serve models using Python, FastAPI, gRPC, and REST APIs. Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows. Ensure production-grade reliability, latency optimization, observability, and failover mechanisms.

Cloud & MLOps Infrastructure: Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services. Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX.

Cross-Functional Collaboration: Partner with Product, Engineering, and Design teams to define AI-first experiences. Translate ambiguous business problems into structured ML/AI projects with measurable ROI. Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews.
Technical Skillset Required: Programming: expert in Python, with strong OOP and data structure fundamentals. Frameworks: proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, and OpenAI/Anthropic APIs. NLP/LLM: strong grasp of Transformer architecture, attention mechanisms, self-supervised learning, and LLM evaluation techniques. MLOps: skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP. Databases: hands-on with SQL/NoSQL databases, vector DBs, and retrieval systems. Tooling: familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents. Applied Research: experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems.

Bonus Points: Contributions to open-source NLP or LLM projects. Publications in AI/NLP/ML conferences or journals. Experience in Online Reputation Management (ORM), martech, or CX platforms. Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
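For the LoRA/PEFT fine-tuning this listing calls out, a minimal sketch assuming the PEFT and Transformers libraries; the base model and target modules are placeholders, and the actual training loop and data are omitted:

```python
# Minimal LoRA setup sketch with PEFT (illustrative): wrap a small causal LM so
# that only low-rank adapter weights are trained.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base model

lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
# `model` can now be passed to a standard transformers Trainer for fine-tuning.
```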

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Job Title: Senior Software Engineer – AI (Prompt Engineering & Full Stack). Location: Remote / United States / India. Employment Type: Full-Time. Product Stage: (Stealth Mode) AI Product Platform for Healthcare.

About Us: We're a growing healthtech startup on a mission to disrupt the $500B+ US Healthcare Operations & Management space using cutting-edge AI tools. Our goal is to build a modern, intelligent platform that eliminates administrative waste and improves healthcare operations. We're backed by the industry's best investors and SMEs, and we're building an elite team of engineers, designers, and domain experts. We are on a mission to help clinicians and patients by simplifying their administrative burden so that patients have the best outcomes.

The Role: We're looking for an experienced Software Engineer – AI who thrives in ambiguity, moves fast, and is passionate about applying AI (& concepts...) to solve real-world problems. You'll play a foundational role in shaping both the product and the engineering culture. This role blends prompt engineering, LLM application design, and full-stack development to help build a next-generation healthcare intelligence engine that solves real-life problems for hospitals and health systems.

What You'll Do: Design and develop prompt architectures for LLM-based workflows. Engineer and fine-tune prompt chains for optimal performance, accuracy, and reliability. Collaborate with domain experts to understand the nuances of healthcare and translate them into intelligent AI interactions. Develop and deploy secure, scalable full-stack features, from front-end UIs to back-end services and APIs. Integrate AI capabilities into product workflows using tools like LangChain, OpenAI APIs, or open-source LLMs. Work closely with the founding team to iterate quickly and bring the product to real-life operations.

Skills & Experience:

Required: 5-7 years of experience in software engineering, ideally in AI or health-tech startups. Strong grasp of prompt engineering for LLMs (e.g., GPT-4, Claude, Mistral). Experience with full-stack development using modern frameworks (e.g., React, Next.js, Node.js, Python, Flask, or FastAPI). Familiarity with AI tooling (LangChain, Pinecone, vector databases, etc.). Understanding of HIPAA and secure data handling practices. Experience shipping production-grade code in a fast-paced, agile environment.

Good to have (plus points): Knowledge of US healthcare operations (e.g., CPT, ICD-10 codes, SNOMED, EDI 837/835, payer and provider workflows). Experience with healthcare data formats (HL7, FHIR, EDI). Experience with DevOps and cloud deployment (AWS/GCP/Azure).

Why Join Us: Work on a mission-critical, AI-first product that impacts real healthcare outcomes. Join at ground zero and shape the technical and product direction. Competitive compensation, meaningful equity, and a flexible work environment. Build with the latest in AI and solve problems the world hasn't cracked yet. If you're excited to use AI to fix the broken healthcare machinery, contact us ASAP!

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Generative AI Intern – Workflow Automation (Remote). Company: SEO Scientist AI. Industry: AI-Powered SEO & Marketing Automation. Location: Remote. Internship Duration: 3 to 6 months. Stipend: Yes (based on skills).

About Us: At SEO Scientist AI, we are building autonomous agents that automate SEO and marketing workflows using large language models and real-time tools. Our goal is to replace repetitive manual tasks in SEO with intelligent systems. We're competing with platforms like AirOps, AgenticFlow, Mazzal, and Relevance, and we're moving fast.

What You'll Work On: Assist in building and testing LLM-based agents for automating SEO and marketing tasks. Support integration of tools like Notion, Google Sheets, and Webflow with AI workflows. Use platforms like OpenAI, LangChain, and AWS Lambda for building internal prototypes. Help write prompt chains, test workflows, and track output accuracy. Work closely with product and engineering teams to improve automation pipelines.

What We're Looking For: Hands-on experience or academic projects in Generative AI or LLMs (OpenAI, Claude, Mistral, etc.). Familiarity with Python and basic cloud concepts (AWS preferred). Interest or experience in automating tasks using tools like Zapier, Make.com, or custom scripts. Bonus: exposure to LangChain, LlamaIndex, Pinecone, or similar agent frameworks. A curious mind and eagerness to learn how AI is reshaping SEO and content marketing.

Perks: Remote-first internship with flexible work hours. Learn from engineers, marketers, and AI experts working on real-world agentic use cases. Get hands-on experience with cutting-edge tools in the Generative AI space. Letter of recommendation and priority for full-time roles.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

On-site

Job Description: We are looking for a skilled LLM / GenAI Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with deep, hands-on experience in Large Language Models (LLMs), prompt engineering, and GenAI tools and frameworks.

Natural Abilities: Smart, self-motivated, responsible, out-of-the-box thinker. Detail-oriented with strong analytical ability. Great written communication skills.

Requirements: 5+ years of total software development experience with a strong foundation in Python. 2-3+ years of hands-on experience working with GenAI / LLMs, including real-world implementation and deployment. Deep familiarity with models like GPT-4, Claude, Mistral, LLaMA, etc. Strong understanding of prompt engineering, LLM fine-tuning, tokenization, and embedding-based search. Experience with Hugging Face, LangChain, OpenAI API, and vector databases (Pinecone, FAISS, Chroma). Exposure to agent frameworks and multi-agent orchestration. Excellent written and verbal communication skills. Proven ability to lead and mentor team members on technical and architectural decisions.

Responsibilities: Lead the design and development of GenAI/LLM-based products and solutions. Mentor and support junior engineers in understanding and implementing GenAI/LLM techniques. Work on fine-tuning, prompt engineering, RAG (Retrieval-Augmented Generation), and custom LLM workflows. Integrate LLMs into production systems using Python and frameworks like LangChain, LlamaIndex, Transformers, etc. Explore and apply advanced GenAI paradigms such as MCP, agent2agent collaboration, and autonomous agents. Research, prototype, and implement new ideas and stay current with state-of-the-art GenAI trends. Collaborate closely with product, design, and engineering teams to align LLM use cases with business goals. (ref:hirist.tech)

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Assurance – Sr. Manager – Digital

Position Details: As part of EY GDS Assurance Digital, you will be responsible for implementing innovative ideas through AI research to develop high-growth, impactful products. You will be helping EY's sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. You will work with multi-disciplinary teams across the entire region to support global clients. This is a combination of a technical role and a business development role in AI, responsible for creating innovative solutions by applying AI-based techniques to business problems. As our in-house senior AI engineer, your expertise and skills will be vital to our ability to steer our innovation agenda.

Responsibilities: 10-12 years of experience in Data Science, with about 15 years of total experience. Talk to businesses across Assurance teams, understand the problem, and develop the solution based on the problem, infrastructure, and cost limitations. Respond to AI-related RFPs and business cases. Ensure the team's utilization and billability, i.e., ensure that projects are lined up for developers one after the other. Convert business problems into analytical problems and devise a solution approach. Clean, aggregate, analyze, and interpret data to derive business insights from it. Own the AI/ML implementation process: model design, feature planning, testing, production setup, monitoring, and release management. Work closely with Solution Architects in the deployment of AI POCs and scaling up to production-level applications. Should have a solid background in Python and have deployed open-source models. Work on data extraction techniques from complex PDF/Word documents/forms: entity extraction, table extraction, and information comparison.

Key Requirements / Skills & Qualifications: Excellent academic background, including at a minimum a bachelor's or master's degree in Data Science, Business Analytics, Statistics, Engineering, Operational Research, or another related field with a strong focus on modern data architectures, processes, and environments. Solid background in Python with excellent coding skills. 6+ years of core data science experience in one or more of the areas below: Machine Learning (regression, classification, decision trees, random forests, time series forecasting, and clustering). Understanding and usage of Large Language Models, e.g., OpenAI models like ChatGPT and GPT-4, function calling, frameworks like LangChain and LlamaIndex, agents, etc. Retrieval-augmented generation and prompt engineering. Good understanding of open-source LLM families like Mistral, Llama, etc., and fine-tuning on custom datasets. Deep Learning (DNN, RNN, LSTM, encoder-decoder models). Natural Language Processing: text summarization, aspect mining, question answering, text classification, NER, language translation, NLG, sentiment analysis. Computer Vision: image classification, object detection, tracking, etc. SQL/NoSQL databases and their manipulation components. Working knowledge of API deployment (Flask/FastAPI/Azure Function Apps) and web app creation, Docker, and Kubernetes.
Additional skills requirements: Excellent written, oral, presentation, and facilitation skills. Ability to coordinate multiple projects and initiatives simultaneously through effective prioritization, organization, flexibility, and self-discipline. Must have demonstrated project management experience. Knowledge of the firm's reporting tools and processes. Proactive, organized, and self-sufficient, with the ability to prioritize and multitask. Analyzes complex or unusual problems and can deliver insightful and pragmatic solutions. Ability to quickly and easily create, gather, and analyze data from a variety of sources. A robust and resilient disposition, able to encourage discipline in team behaviors.

What We Look For: A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What Working At EY Offers: At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching, and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; the freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us: Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi: Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds, all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

Yubi Credit Marketplace: with the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. Yubi Invest: fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income. Financial Services Platform: designed for financial institutions to manage co-lending partnerships and asset-based securitization. Spocto: debt recovery and risk mitigation platform. Corpository: dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.

So far, we have onboarded 17,000+ enterprises, 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1,000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Job Summary: We are looking for a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.
Key Responsibilities: Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions. Build and optimize conversational AI solutions, drawing on at least 1 year of experience in chatbot development. Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. Design and develop ML/DL-based models to enhance natural language understanding capabilities. Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications. Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks. Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance. Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications. Develop APIs and pipelines to integrate LLMs into enterprise applications. Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Requirements

Required Skills & Qualifications: 3-5 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development. Hands-on experience in developing and deploying conversational AI/chatbots is a plus. Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, and LangGraph. Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models. Understanding of prompt engineering and fine-tuning LLMs. Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale. Experience working with APIs, Docker, and FastAPI for model deployment. Strong analytical and problem-solving skills. Ability to work independently and collaboratively in a fast-paced environment.

Good to Have: Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.). Knowledge of Knowledge Graphs and Symbolic AI. Understanding of MLOps and LLMOps for deploying scalable AI solutions. Experience in automated evaluation of LLMs and bias mitigation techniques. Research experience or published work in LLMs, NLP, or Generative AI is a plus.

Why Join Us? Opportunity to work on cutting-edge LLM and Generative AI projects. Collaborative and innovative work environment. Competitive salary and benefits. Career growth opportunities in AI and ML research and development.
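For the FastAPI-based model deployment this listing mentions, a minimal serving sketch; the endpoint path, request schema, and the stub generate() function are placeholders for whatever model backend is actually used:

```python
# Minimal model-serving sketch with FastAPI (illustrative): expose a text
# generation endpoint that forwards prompts to an LLM client.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="llm-service")

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    completion: str

def generate(prompt: str, max_tokens: int) -> str:
    """Placeholder for a real model call (OpenAI, vLLM, Hugging Face, etc.)."""
    return f"[model output for: {prompt[:40]}...]"

@app.post("/v1/generate", response_model=GenerateResponse)
def generate_endpoint(req: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(completion=generate(req.prompt, req.max_tokens))

# Run locally with: uvicorn app:app --reload
```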

Posted 1 month ago

Apply

0 years

0 Lacs

Delhi, India

Remote

About Us: Astra is a cybersecurity SaaS company that makes otherwise chaotic pentests a breeze with its one-of-a-kind AI-led offensive pentest platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications for 13,000+ security tests. CTOs and CISOs love Astra because it helps them achieve continuous security at scale, fix vulnerabilities in record time, and seamlessly transition from DevOps to DevSecOps with Astra's powerful CI/CD integrations. Astra is loved by 800+ companies across 70+ countries. In 2024 Astra uncovered 2.5 million+ vulnerabilities for its customers, saving customers $110M+ in potential losses due to security vulnerabilities. We've been awarded by the President of France Mr. François Hollande at the La French Tech program and the Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, Dream 11, OLX Autos, etc. are a few of Astra's customers.

Job Description: This is a remote position.

Role Overview: As Astra Security's first AI Engineer, you will play a pivotal role in introducing and embedding AI into our security products. You will be responsible for designing, developing, and deploying AI applications leveraging both open-source models (Llama, Mistral, DeepSeek, etc.) and proprietary services (OpenAI, Anthropic). Your work will directly impact how AI is used to enhance threat detection, automate security processes, and improve intelligence gathering. This is an opportunity to not only build future AI models but also define Astra Security's AI strategy, laying the foundation for future AI-driven security solutions.

Key Responsibilities: Lead the AI integration efforts within Astra Security, shaping the company's AI roadmap. Develop and optimize Retrieval-Augmented Generation (RAG) pipelines with multi-tenant capabilities. Build and enhance RAG applications using LangChain, LangGraph, and vector databases (e.g., Milvus, Pinecone, pgvector). Implement efficient document chunking, retrieval, and ranking strategies. Optimize LLM interactions using embeddings, prompt engineering, and memory mechanisms. Work with graph databases (Neo4j or similar) for structuring and querying knowledge bases. Design multi-agent workflows using orchestration platforms like LangGraph or other emerging agent frameworks for AI-driven decision-making and reasoning. Integrate vector search, APIs, and external knowledge sources into agent workflows. Exposure to the end-to-end AI ecosystem (e.g., Hugging Face) to accelerate AI development; while initial work won't involve extensive model training, the candidate should be ready for fine-tuning, domain adaptation, and LLM deployment when needed. Design and develop AI applications using LLMs (Llama, Mistral, OpenAI, Anthropic, etc.). Build APIs and microservices to integrate AI models into backend architectures.
Collaborate with the product and engineering teams to integrate AI into Astra Security's core offerings. Stay up to date with the latest advancements in AI and security, ensuring Astra remains at the cutting edge.

What We Are Looking For: Exceptional Python skills for AI/ML development. Hands-on experience with LLMs and AI frameworks (LangChain, Transformers, RAG-based applications). Strong understanding of retrieval-augmented generation (RAG) and knowledge graphs. Experience with AI orchestration tools (LangChain, LangGraph). Familiarity with graph databases (Neo4j or similar). Experience with Ollama for efficient AI model deployment for production workloads is a plus. Experience deploying AI models using Docker. Hands-on experience with Ollama setup and loading DeepSeek/Llama. Strong problem-solving skills and a self-starter mindset: you will be building AI at Astra from the ground up. Software Engineering Mindset: this role requires a strong software engineering mindset to build AI solutions from 0 to 1 and scale them based on business needs; the candidate should be comfortable designing, developing, testing, and deploying production-ready AI systems while ensuring maintainability, performance, and scalability.

Nice To Have: Experience with AI deployment frameworks (e.g., BentoML, FastAPI, Flask, AWS). Background in cybersecurity or security-focused AI applications.

Why Join Astra Security? Own and drive the AI strategy at Astra Security from day one. Fully remote, agile working environment. Good engineering culture with full ownership of the design, development, and release lifecycle. A wholesome opportunity where you get to build things from scratch, improve, and ship code to production in hours, not weeks. Holistic understanding of the SaaS and enterprise security business. Annual trips to beaches or mountains (the last one was at Wayanad). Open and supportive culture. Health insurance and other benefits for you and your spouse. Maternity benefits included.
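For the Ollama-based local model deployment this listing highlights, a minimal sketch against Ollama's documented REST API; it assumes an Ollama server is running locally on its default port with a model already pulled, and the model name and prompt are placeholders:

```python
# Minimal local-inference sketch against an Ollama server (illustrative).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a single non-streaming generation request to the local Ollama server."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain what a multi-tenant RAG pipeline is in two sentences."))
```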

Posted 1 month ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site

Job Title: GenAI / ML Engineer Function: Research & Development Location: Delhi/Bangalore (3 days in office) About the Company: Elucidata is a TechBio company headquartered in San Francisco. Our mission is to make life sciences data AI-ready. Elucidata's LLM-powered platform, Polly, helps research teams wrangle, store, manage, and analyze large volumes of biomedical data. We are at the forefront of driving GenAI in life sciences R&D across leading BioPharma companies like Pfizer, Janssen, NextGen Jane, and many more. We were recognised as the 'Most Innovative Biotech Company, 2024' by Fast Company. We are a 120+ multi-disciplinary team of experts based across the US and India. In September 2022, we raised $16 million in our Series A round led by Eight Roads, F-Prime, and our existing investors Hyperplane and IvyCap. About the Role: We are looking for a GenAI / ML Engineer to join our R&D team and work on cutting-edge applications of LLMs in biomedical data processing. In this role, you'll help build and scale intelligent systems that can extract, summarize, and reason over biomedical knowledge from large bodies of unstructured text, including scientific publications, EHR/EMR reports, and more. You'll work closely with data scientists, biomedical domain experts, and product managers to design and implement reliable GenAI-powered workflows — from rapid prototypes to production-ready solutions. This is a highly strategic role as we continue to invest in agentic AI systems and LLM-native infrastructure to power the next generation of biomedical applications. Key Responsibilities: Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI. Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for biomedical applications. Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback. Collaborate cross-functionally with data scientists, bioinformaticians, product teams, and curators to build impactful AI solutions. Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods. Qualifications: 2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction. Strong Python programming skills and experience building production-ready codebases. Hands-on experience with LLM frameworks and tooling (e.g., LangChain, Hugging Face, OpenAI APIs, Transformers). Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices. Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow. Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment. Good to Have (Preferred but Not Mandatory) Experience working with biomedical or clinical text (e.g., PubMed, EHRs, trial data). Exposure to building autonomous agents using CrewAI or LangGraph. Understanding of knowledge graph construction and integration with LLMs. Experience with evaluation challenges unique to GenAI workflows (e.g., hallucination detection, grounding, traceability). Experience with fine-tuning, LoRA, PEFT, or using embeddings and vector stores for retrieval. Working knowledge of cloud platforms (AWS/GCP) and MLOps tools (MLflow, Airflow, etc.).
Contributions to open-source LLM or NLP tooling. We are proud to be an equal-opportunity workplace and an affirmative action employer. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status.
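The entity-extraction pipelines described above can be prototyped with a structured-output prompt long before any fine-tuning. The sketch below assumes an OpenAI-compatible endpoint with an API key in the environment; the model name, entity schema, and example abstract are placeholders rather than Elucidata's actual setup.

```python
# Sketch of LLM-based biomedical entity extraction with structured JSON output.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = (
    "Treatment with metformin reduced HbA1c levels in patients with type 2 diabetes "
    "compared to placebo over 24 weeks."
)

system = (
    "You extract biomedical entities. Return JSON with keys "
    "'drugs', 'conditions', and 'measurements', each a list of strings. "
    "Use only spans that appear verbatim in the text."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": abstract},
    ],
    response_format={"type": "json_object"},  # ask for parseable JSON
    temperature=0,
)

entities = json.loads(response.choices[0].message.content)
print(entities)  # e.g. {"drugs": ["metformin"], "conditions": ["type 2 diabetes"], ...}
# Downstream steps (ontology normalization, KG loading) would validate these spans
# against vocabularies such as MeSH or UMLS before ingestion.
```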

Posted 1 month ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Our Company Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We create intelligent, user-first digital products that redefine industries through the power of AI and engineering excellence. Role Overview We are looking for a Senior Software Engineer-AI with 4-6 years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment—fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies. What we are looking for in an ideal candidate: Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs). Design and implement Agentic AI systems—autonomous agents capable of planning and executing multi-step tasks. Collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products. Write clean, scalable code and build robust APIs and services to support AI model deployment. Own feature delivery end-to-end—from research and experimentation to deployment and monitoring. Stay current with emerging AI frameworks, tools, and best practices and apply them in product development. Contribute to a high-performing team culture and mentor junior team members as needed. Preferred Skills What skills do you need? 3–6 years of overall software development experience, with 3+ years specifically in AI/ML engineering. Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face). Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio). Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel). Experience building and deploying ML models to production environments. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts. Comfortable working in a startup-like environment—self-motivated, adaptable, and willing to take ownership. Solid understanding of API development, version control, and modern DevOps/MLOps practices.
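Agentic systems of the kind this role describes usually reduce to a loop in which the model either calls a tool or returns an answer. The framework-agnostic sketch below uses the OpenAI tool-calling API purely for illustration; the tool, model name, and step cap are assumptions, and frameworks such as LangChain, AutoGPT, or Semantic Kernel wrap the same pattern.

```python
# Sketch of a single-tool agent loop: the LLM either calls the tool or answers directly.
import json
from openai import OpenAI

client = OpenAI()

def search_docs(query: str) -> str:
    """Stand-in tool; a real agent would query a vector store, API, or database."""
    return f"(pretend search results for: {query})"

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search internal documentation.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarise our deployment runbook for the API service."}]

for _ in range(5):  # hard cap on agent steps to avoid infinite loops
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=TOOLS)
    msg = reply.choices[0].message
    if not msg.tool_calls:          # the model answered directly -> done
        print(msg.content)
        break
    messages.append(msg)            # keep the tool-call turn in the transcript
    for call in msg.tool_calls:     # execute each requested tool and feed results back
        args = json.loads(call.function.arguments)
        result = search_docs(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```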

Posted 1 month ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

We are looking for a skilled LLM / GenAI Expert who can drive innovative AI/ML solutions and spearhead the development of advanced GenAI-powered applications. The ideal candidate will be a strong Python programmer with deep, hands-on experience in Large Language Models (LLMs), prompt engineering, and GenAI tools and frameworks. This role requires not only technical proficiency but also excellent communication skills and the ability to guide and upskill junior team members. Exposure to cutting-edge concepts like multi-agent collaboration, Memory-Context-Planning (MCP), and agent-to-agent workflows is highly desirable. Job Responsibilities: Lead the design and development of GenAI/LLM-based products and solutions. Mentor and support junior engineers in understanding and implementing GenAI/LLM techniques. Work on fine-tuning, prompt engineering, RAG (Retrieval-Augmented Generation), and custom LLM workflows. Integrate LLMs into production systems using Python and frameworks like LangChain, LlamaIndex, Transformers, etc. Explore and apply advanced GenAI paradigms such as MCP, agent-to-agent collaboration, and autonomous agents. Research, prototype, and implement new ideas and stay current with state-of-the-art GenAI trends. Collaborate closely with product, design, and engineering teams to align LLM use cases with business goals. Requirements: 4+ years of total software development experience with a strong foundation in Python. 2–3+ years of hands-on experience working with GenAI / LLMs, including real-world implementation and deployment. Deep familiarity with models like GPT-4, Claude, Mistral, LLaMA, etc. Strong understanding of prompt engineering, LLM fine-tuning, tokenization, and embedding-based search. Experience with Hugging Face, LangChain, OpenAI API, and vector databases (Pinecone, FAISS, Chroma). Exposure to agent frameworks and multi-agent orchestration. Excellent written and verbal communication skills. Proven ability to lead and mentor team members on technical and architectural decisions. Experience with cloud platforms (AWS/GCP/Azure) for deploying LLMs at scale. Knowledge of ethical AI practices, bias mitigation, and model interpretability. Background in machine learning, natural language processing (NLP), or AI research. Publications or contributions to open-source LLM projects are a plus. Benefits: Competitive Compensation and Benefits Half Yearly Appraisals Friendly Environment Work-life Balance 5-day work week Flexible office timings Employee-friendly leave policies Work from Home (with prior approvals)
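Embedding-based search against a vector database, a recurring requirement in this posting, can be prototyped in a few lines. The sketch below uses Chroma's in-memory client with placeholder documents; Pinecone or FAISS would slot into the same role in a RAG workflow.

```python
# Minimal vector-store retrieval sketch with Chroma (collection name and texts are placeholders).
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to persist
collection = client.create_collection(name="product_docs")

collection.add(
    ids=["doc-1", "doc-2", "doc-3"],
    documents=[
        "Refunds are processed within 5 business days of approval.",
        "Premium subscribers get priority support via chat.",
        "The mobile app supports offline mode for saved reports.",
    ],
)

# Chroma embeds the query with its default embedding function and returns nearest documents;
# in a RAG pipeline these hits become the context block of the LLM prompt.
hits = collection.query(query_texts=["How long do refunds take?"], n_results=2)
for doc in hits["documents"][0]:
    print(doc)
```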

Posted 1 month ago

Apply

3.0 - 4.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Job Title: AI Research Engineer – Private LLM & Cognitive Systems Location: Delhi Type: Full-Time Experience Level: Senior / Expert Start Date: Immediate About Brainwave Science: Brainwave Science is a leader in cognitive technologies, specializing in solutions for the security and intelligence sectors. Our flagship product, iCognative™, leverages real-time cognitive response analysis using Artificial Intelligence and Machine Learning techniques to redefine the future of investigations, defense, counterterrorism, and counterintelligence operations. Beyond security, Brainwave Science is at the forefront of healthcare innovation, applying our cutting-edge technology to identify diseases, neurological conditions, and mental health challenges early, detect stress and anxiety in real time, and provide non-medical, science-backed interventions. Together, we are shaping a future where advanced technology strengthens security, promotes wellness, and creates a healthier, safer world for individuals and communities worldwide. About the Role We are seeking an experienced and forward-thinking AI/ML Engineer – LLM & Deep Learning Expert to design, develop, and deploy Large Language Models (LLMs) and intelligent AI systems. You will work on cutting-edge projects at the intersection of natural language processing, edge AI, and biosignal intelligence, helping drive innovation across defense, security, and healthcare use cases. This role is ideal for someone who thrives in experimental environments, understands private and local LLM deployments, and is passionate about solving real-world challenges using advanced AI. Responsibilities Design, train, fine-tune, and deploy Large Language Models using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers Integrate LLMs for local/edge deployment using tools like Ollama, LangChain, LM Studio, or llama.cpp Build NLP applications for intelligent automation, investigative analytics, and biometric interpretation Optimize models for low latency, token efficiency, and on-device performance Work on prompt engineering, embedding tuning, and vector search integration (FAISS, Qdrant, Weaviate) Collaborate with technical and research teams to deliver scalable AI-driven features Stay current with developments in open-source and closed-source LLM ecosystems (e.g., Meta, OpenAI, Mistral) Must-Have Requirements B.Tech/M.Tech in Computer Science (CSE), Electronics & Communication (ECE), or Electrical & Electronics (EEE) from IIT, NIT, or BITS Minimum 3-4 years of hands-on experience in AI/ML, deep learning, and LLM development Deep experience with Transformer models (e.g., GPT, LLaMA, Mistral, Falcon, Claude) Hands-on with tools like LangChain, Hugging Face, Ollama, Docker, or Kubernetes Proficiency in Python and strong knowledge of Linux environments Strong understanding of NLP, attention mechanisms, and model fine-tuning Preferred Qualifications Experience with biosignals, especially EEG or time-series data Experience deploying custom-trained LLMs on proprietary datasets Familiarity with RAG pipelines and multi-modal models (e.g., CLIP, LLaVA) Knowledge of cloud platforms (AWS, GCP, Azure) for scalable model training and serving Published research, patents, or open-source contributions in AI/ML communities Excellent communication, analytical, and problem-solving skills What We Offer Competitive compensation based on experience Flexible working hours and remote-friendly culture Access to high-performance
compute infrastructure Opportunities to work on groundbreaking AI projects in healthcare, security, and defense
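Local and edge LLM deployment of the kind this role calls for is often prototyped against Ollama's HTTP API. The sketch below assumes the Ollama daemon is running locally and a model has already been pulled (e.g. `ollama pull llama3`); the model name and prompt are placeholders.

```python
# Sketch of a local/edge LLM call through Ollama's REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "In two sentences, explain what an attention mechanism does in a transformer.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
# Because nothing leaves the machine, the same pattern suits air-gapped or
# privacy-sensitive deployments where hosted proprietary models cannot be used.
```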

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance. As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will directly work with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us! We'd love to explore more about you if you have Qualification: B.E./B.Tech/M.E./M.Tech/PhD from a tier-1 engineering institute in computer science or mathematics-related fields, with relevant work experience at a top technology company and 3-5 years of experience in machine learning and NLP Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc. 3+ years of experience working with LLMs in large-scale environments. Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc. Deep familiarity with internals of at least a few Machine Learning algorithms and concepts Experience with Deep Learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc. Experience with ML model deployments using REST API, Docker, Kubernetes, etc. Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable Knowledge of basic data structures and algorithms Knowledge of real-time streaming tools/architectures like Kafka, Pub/Sub is a plus Your role at Level AI includes but is not limited to Big picture: Understand customers' needs, innovate and use cutting-edge Deep Learning techniques to build data-driven solutions Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services Optimize existing deep learning models for performance, scalability, and efficiency Build, deploy, and own scalable production NLP pipelines Build post-deployment monitoring and continual learning capabilities. Propose suitable evaluation metrics and establish benchmarks Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.
To learn more visit: https://thelevel.ai/ Funding: https://www.crunchbase.com/organization/level-ai LinkedIn: https://www.linkedin.com/company/level-ai/
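Many of the NLP tasks listed in this posting (intent, sentiment, escalation detection) start out as text classification. The sketch below uses an off-the-shelf Hugging Face sentiment checkpoint as a stand-in; a production contact-center system would fine-tune a domain-specific model on its own transcripts instead.

```python
# Minimal text-classification sketch with the Hugging Face transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # public stand-in checkpoint
)

utterances = [
    "I've been on hold for forty minutes and nobody can tell me where my order is.",
    "Thanks, the agent resolved my billing issue really quickly.",
]

# Each prediction is a dict with a label and a confidence score.
for text, pred in zip(utterances, classifier(utterances)):
    print(f"{pred['label']:<8} {pred['score']:.2f}  {text}")
```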

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary Job title: AI & Generative AI - Senior Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage ,implement & support risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. The Team Internal Audit As part of Digital Internal Audit team, you will be part of our USI Internal Audit practice and will be responsible for scaling digital capabilities for our IA clients. Responsibilities will include helping our clients adopt digital through various stages of their Internal Audit lifecycle, from planning till reporting. We help organizations enhance their digital footprint in the Internal Audit space by adopting a digital approach. Through digital, Internal Audit groups can not only perform its traditional duties better, but can broaden its mandate and sphere of influence to provide actionable insights to management through the anticipation of both risks and opportunities. Work you’ll do The key job responsibilities will be to: Analyze client requirements, perform gap analysis, design and develop digital solutions using data analytics tools and technologies to solve client needs Design, build, and implement custom AI and Generative AI solutions tailored to meet specific business needs Utilize advanced machine learning techniques to develop/train AI models and fine-tune Gen-AI models. Continuously monitor and improve the performance of AI models to ensure they meet the desired performance metrics Deploy solution across various cloud and on-prem computing platforms Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Conduct data analysis, preprocessing, and modeling to extract valuable insights and drive data-driven decision-making. Provide technical expertise and thought leadership in the field of AI/Generative AI to guide clients in adopting AI-driven solutions. Stay up to date with the latest advancements in AI and machine learning technologies to incorporate cutting-edge solutions into projects. Required Skills Proficiency in designing and developing generative AI models and AI algorithms utilizing state-of-the-art techniques. Experience in developing and maintaining data pipelines and Retrieval-Augmented Generation (RAG) solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning Experience with cloud services (AWS, Azure, GCP) for deploying AI models and applications. Familiarity with Gen AI application design and architecture Practical experience of evaluating Large-Language Models (LLM) performance (using common metrics) In-depth knowledge of Natural Language Processing (NLP) and its applications in AI solutions. 
Expertise in handling large multi-modal datasets and applying data preprocessing techniques to ensure data quality and relevance. Ability to analyze and interpret complex data to derive actionable insights and drive decision-making processes Strong knowledge of programming languages like Python and SQL Proficiency in building agent frameworks, RAG pipelines, experience in LLM Frameworks like LangChain or LlamaIndex; implementation exposure to vector databases like Pinecone, Chroma or FAISS Strong understanding of Gen AI techniques and LLMs such as GPT, Gemini, Claude, etc. and Open Source LLMs (Llama, Mistral, etc.) Expertise in Natural Language Processing (NLP), Deep Learning, LLMs & Generative AI (Gen AI) Experience with frameworks such as TensorFlow, PyTorch and Keras Familiar with AI/Gen AI Ethics & Governance frameworks, applications and archetypes Preferred Skills Certification in Cloud (AWS/ Azure / GCP) Certification in AI/ML or Data Analytics Qualification B.Tech/B.E. and/or MBA Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304565 Show more Show less
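Evaluating LLM output with common metrics, as the required skills above mention, can start with simple reference-based scores. The sketch below uses the Hugging Face `evaluate` library and ROUGE on placeholder strings; real evaluations would add groundedness and task-specific checks on top.

```python
# Sketch of scoring generated answers against references with ROUGE (placeholder examples).
import evaluate

rouge = evaluate.load("rouge")

predictions = [
    "The audit found three high-risk access-control gaps in the payments system.",
]
references = [
    "Three high-risk access control gaps were identified in the payments platform during the audit.",
]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # e.g. {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
# Overlap metrics are only one signal; RAG evaluations typically also check
# faithfulness of the answer against the retrieved context.
```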

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary Position Summary Job title: AI & Generative AI - Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage ,implement & support risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and Robotic Process Automation (RPA) to solve Deloitte’s clients‘ most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions. The Team Internal Audit As part of Digital Internal Audit team, you will be part of our USI Internal Audit practice and will be responsible for scaling digital capabilities for our IA clients. Responsibilities will include helping our clients adopt digital through various stages of their Internal Audit lifecycle, from planning till reporting. We help organizations enhance their digital footprint in the Internal Audit space by adopting a digital approach. Through digital, Internal Audit groups can not only perform its traditional duties better, but can broaden its mandate and sphere of influence to provide actionable insights to management through the anticipation of both risks and opportunities. Work you’ll do The key job responsibilities will be to: Analyze client requirements, perform gap analysis, design and develop digital solutions using data analytics tools and technologies to solve client needs Design, build, and implement custom AI and Generative AI solutions tailored to meet specific business needs Utilize advanced machine learning techniques to develop/train AI models and fine-tune Gen-AI models. Continuously monitor and improve the performance of AI models to ensure they meet the desired performance metrics Deploy solution across various cloud and on-prem computing platforms Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Conduct data analysis, preprocessing, and modeling to extract valuable insights and drive data-driven decision-making. Provide technical expertise and thought leadership in the field of AI/Generative AI to guide clients in adopting AI-driven solutions. 
Stay up to date with the latest advancements in AI and machine learning technologies to incorporate cutting-edge solutions into projects. Required Skills Proficiency in designing and developing generative AI models and AI algorithms utilizing state-of-the-art techniques. Experience in developing and maintaining data pipelines and Retrieval-Augmented Generation (RAG) solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning Experience with cloud services (AWS, Azure, GCP) for deploying AI models and applications. Familiarity with Gen AI application design and architecture Practical experience of evaluating Large-Language Models (LLM) performance (using common metrics) In-depth knowledge of Natural Language Processing (NLP) and its applications in AI solutions. Expertise in handling large multi-modal datasets and applying data preprocessing techniques to ensure data quality and relevance. Ability to analyze and interpret complex data to derive actionable insights and drive decision-making processes Strong knowledge of programming languages like Python and SQL Proficiency in building agent frameworks, RAG pipelines, experience in LLM Frameworks like LangChain or LlamaIndex; implementation exposure to vector databases like Pinecone, Chroma or FAISS Strong understanding of Gen AI techniques and LLMs such as GPT, Gemini, Claude, etc. and Open Source LLMs (Llama, Mistral, etc.) Expertise in Natural Language Processing (NLP), Deep Learning, LLMs & Generative AI (Gen AI) Experience with frameworks such as TensorFlow, PyTorch and Keras Familiar with AI/Gen AI Ethics & Governance frameworks, applications and archetypes Preferred Skills Certification in Cloud (AWS/ Azure / GCP) Certification in AI/ML or Data Analytics Qualification B.Tech/B.E. and/or MBA Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304561 Show more Show less
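The data-preprocessing step of the RAG pipelines mentioned above typically includes chunking source documents before embedding. A minimal, word-based chunker is sketched below; production pipelines usually split on tokens or on document structure, so treat the sizes and the helper itself as illustrative.

```python
# Simple overlapping chunker for RAG preprocessing (illustrative only).
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into word windows of `chunk_size`, carrying `overlap` words of context over."""
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + chunk_size]))
        start += chunk_size - overlap
    return chunks

policy_text = "..."  # e.g. an internal audit policy document loaded from disk
for i, chunk in enumerate(chunk_text(policy_text)):
    print(i, chunk[:60])
# Each chunk would then be embedded and written to a vector database
# (Pinecone, Chroma, FAISS) for retrieval at query time.
```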

Posted 1 month ago

Apply

11.0 - 15.0 years

20 - 30 Lacs

Pune, Chennai, Bengaluru

Work from Office

ONLY 30 DAYS JOINERS Hello, I hope you're doing well. We are currently hiring for an exciting opportunity with one of our top clients in the field of Generative AI. Based on your background, I believe this role could be a strong match for your expertise. Role Highlights: Location: Bangalore/Chennai/Kolkata/Pune/Hyderabad Experience Required: 11-16 Years Notice Period: Maximum 30 Days CTC: As per market standards Mandatory Skills: Gen AI, LLM, ML/DL/NLP, RAG, LangChain, Mistral, Llama, Hugging Face, Python, TensorFlow, PyTorch, Django, Vector DB Preferred Skills: GCP/Azure/AWS, Databricks, MLOps (Kubeflow/MLflow), Kubernetes, GitHub/Bitbucket, ADO, GPT-4 Key Responsibilities: Develop and implement advanced Generative AI models. Apply ML/DL algorithms for real-world applications across NLP and vision domains. Collaborate on data preprocessing, model training, and deployment. Drive innovation by staying current with industry trends and R&D in Gen AI. Ensure responsible AI practices and performance monitoring post-deployment. Mentor junior members and lead AI-driven project initiatives. If you're interested or know someone who fits this role, please share your updated resume and current CTC/Notice Period details. WhatsApp: 987153039 Email: shweta.gupta@sspearhead.com Warm Regards SHWETA GUPTA || Senior Recruitment Specialist Spearhead Professional Services Call/WhatsApp: 9871530393 LinkedIn Profile: shwetagupta1810

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

***Immediate requirement*** Job Title: AI Engineer with LLM Salary Range: 13 LPA - 15 LPA No. of years of experience: 5+ years Job Type: Contract Contract Duration: 12 months (potential to extend) Location: Gurugram Work Type: Hybrid Start Date: Immediate (Notice period/joining within 1-2 weeks) We are looking for an experienced AI Engineer with a strong background in designing, fine-tuning, and deploying Large Language Models (LLMs). The ideal candidate will have hands-on expertise in NLP, deep learning frameworks (like PyTorch or TensorFlow), prompt engineering, and using APIs or open-source models such as OpenAI, LLaMA, or Mistral. Experience in deploying models to production environments and optimizing for performance is key. Key Skills: LLMs, NLP, Python, Transformers, Prompt Engineering, OpenAI/GPT, Model Fine-Tuning, ML Pipelines, API Integration, MLOps. **Apply only if you can join within 1-2 weeks**
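Model fine-tuning for roles like this is commonly done with parameter-efficient methods. The sketch below attaches LoRA adapters to a causal LM with Hugging Face PEFT; the base checkpoint, target modules, and hyperparameters are assumptions (and loading a 7B model needs substantial GPU memory), so treat it as a starting point rather than a recipe.

```python
# Sketch of attaching LoRA adapters to a causal LM with Hugging Face PEFT before fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # any causal LM checkpoint you are licensed to use
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                    # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections typical for Llama/Mistral-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()          # only the small adapter matrices are trainable
# From here the model plugs into a standard Trainer / SFT loop; the merged or
# adapter-only weights can then be served behind an API for production inference.
```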

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission. Key Responsibilities: Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (like sports imagery and video) Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including those not seen during training LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections) System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. 
Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure) Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization Required Qualifications: Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field 5+ years (for Principal / MTS-4) / 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on Computer Vision Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets Experience in fine-tuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras Demonstrable experience with Multi-Modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques Experience with developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django) Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow) Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices Excellent problem-solving skills and the ability to work with complex, large-scale datasets Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences Full Stack Development experience in any one stack Preferred Qualifications / Bonus Skills: Experience with Generative AI vision models for tasks like image analysis, description, or validation Track record of publications in top-tier AI/ML/CV conferences or journals Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics) Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services Experience with video processing and analysis techniques Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka) Demonstrated ability to lead technical projects and mentor team members Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Show more Show less
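As a rough illustration of the detection work described in this posting, the sketch below runs an off-the-shelf torchvision detector on a single frame. The checkpoint, confidence threshold, and file path are assumptions; a sponsorship-analysis system would use a detector fine-tuned on logos and on-screen graphics (e.g., an RT-DETR variant) rather than generic COCO classes.

```python
# Sketch of off-the-shelf object detection with torchvision on one broadcast frame.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = Image.open("frame.jpg").convert("RGB")    # placeholder path to a decoded video frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]     # dict of boxes, labels, scores

labels = weights.meta["categories"]
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.6:                               # keep reasonably confident detections only
        print(labels[int(label)], [round(v, 1) for v in box.tolist()], round(score.item(), 2))
```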

Posted 1 month ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role will place you at the forefront of this mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Sr Principal Engineers to join us on this mission. Key Responsibilities: Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, particularly focusing on computer vision and LLM applications for sports media analysis Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets (like sports imagery and video) Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including those not seen during training LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and Generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections) System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability. This includes developing APIs and integrating models into broader Nielsen Sports platforms UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow for human-in-the-loop validation Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields. 
Evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources (including cloud platforms like AWS, GCP, or Azure) Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization Required Qualifications: Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field 5+ years (for Principal / MTS-4) / 8+ years (for Senior Principal / MTS-5) of hands-on experience in developing and deploying AI/ML models, with a strong focus on Computer Vision Proven experience in training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets Experience in fine-tuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras Demonstrable experience with Multi-Modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques Experience with developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django) Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow) Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices Excellent problem-solving skills and the ability to work with complex, large-scale datasets Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences Full Stack Development experience in any one stack Preferred Qualifications / Bonus Skills: Experience with Generative AI vision models for tasks like image analysis, description, or validation Track record of publications in top-tier AI/ML/CV conferences or journals Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics) Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services Experience with video processing and analysis techniques Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka) Demonstrated ability to lead technical projects and mentor team members Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Show more Show less
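This posting also mentions lightweight UIs for human-in-the-loop validation. Below is a minimal Streamlit sketch for reviewing model detections on a frame; the file handling, detection format, and save step are illustrative assumptions, not an existing Nielsen tool.

```python
# Tiny Streamlit tool for human-in-the-loop review of model detections.
# Run with: streamlit run review_app.py
import json
import streamlit as st
from PIL import Image

st.title("Detection review")

uploaded = st.file_uploader("Frame to review", type=["jpg", "png"])
detections_json = st.text_area(
    "Model detections (JSON list of labels)", '["sponsor_logo_A", "scoreboard"]'
)

if uploaded is not None:
    st.image(Image.open(uploaded), caption="Uploaded frame", use_container_width=True)
    labels = json.loads(detections_json)
    # One checkbox per predicted label lets a reviewer confirm or reject each detection.
    approved = [lbl for lbl in labels if st.checkbox(f"Correct: {lbl}", value=True)]
    if st.button("Save review"):
        st.success(f"Would write {len(approved)} approved labels to the annotation store.")
```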

Posted 1 month ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies