0.0 - 3.0 years
0 Lacs
India
On-site
Duration: 12 Months
Location: PAN India
Timings: Full Time (as per company timings)
Notice Period: within 15 days or immediate joiner
Experience: 0-3 Years
Work type: Full-time
We are looking for a Machine Learning Engineer to help us deploy, configure, and optimize Mistral 7B, integrate it into our services, and implement Retrieval-Augmented Generation (RAG) for database interactions.
Key Responsibilities
Configure and optimize Mistral 7B for business tasks (fine-tuning, quantization, performance optimization).
Assess required computational resources, select the optimal infrastructure for model deployment (on-premise or cloud), and analyse cost efficiency.
Implement RAG to integrate models with vector databases.
Orchestrate interactions between multiple ML services (e.g., one model generates tags and another validates task descriptions).
Develop a service for interacting with the model (API for predictions, model management, integration with our application).
Optimize model performance for real-world usage.
Requirements
Experience with LLM models (Mistral 7B, GPT-3/4, LLaMA, Claude, Falcon, Bloom, etc.).
Understanding of Retrieval-Augmented Generation (RAG) and model integration with databases.
Hands-on experience with fine-tuning and dataset preparation/annotation.
Experience with vector databases (Pinecone, Weaviate, FAISS).
Strong proficiency in Python and libraries such as PyTorch, TensorFlow, Hugging Face, and LangChain.
Experience in evaluating and optimizing infrastructure for AI deployments.
Experience in developing APIs for integrating AI models into business processes.
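For illustration only (not part of the posting): a minimal sketch of the retrieval step behind a RAG setup like the one described above, assuming sentence-transformers and faiss-cpu are installed; the documents, embedding model, and prompt are placeholders.

```python
# Minimal RAG lookup sketch: embed documents, index them with FAISS, retrieve context for a query.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are archived for seven years.",
    "Support tickets are triaged within four hours.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model for the sketch

doc_vecs = embedder.encode(docs, convert_to_numpy=True).astype("float32")
index = faiss.IndexFlatL2(doc_vecs.shape[1])  # exact L2 index over document vectors
index.add(doc_vecs)

query = "How long do we keep invoices?"
q_vec = embedder.encode([query], convert_to_numpy=True).astype("float32")
_, hits = index.search(q_vec, 1)  # ids of the nearest documents

context = docs[hits[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to the deployed Mistral 7B endpoint for generation.
print(prompt)
```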
Posted 2 weeks ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior/Lead Data Scientist - NLP/Gen AI
Location: Chennai, Bangalore, Hyderabad
Who we are
Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow.
Our team of 4000+ technologists and consultants are based in the US, Canada, the UK, India, Singapore and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are a Great Place to Work-Certified™ (2022-24), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, ISG and others. We have been ranked among the ‘Best’ and ‘Fastest Growing’ analytics firms lists by Inc., Financial Times, Economic Times and Analytics India Magazine.
Curious about the role? What would your typical day look like?
As a Data Scientist specializing in Generative AI and NLP, you will be at the forefront of AI innovation. Your role will involve designing and deploying sophisticated models, including large language models (LLMs), to solve complex business problems. You will work closely with cross-functional teams to create scalable, data-driven solutions that bring AI-driven creativity and intelligence to life across various industries.
Generative AI & NLP Development: Design, develop, and deploy advanced applications and solutions using Generative AI models (e.g., GPT, LLaMA, Mistral) and NLP algorithms to solve business challenges and unlock new opportunities for our clients.
Model Customization & Fine-Tuning: Apply state-of-the-art techniques like LoRA, PEFT, and fine-tuning of large language models to adapt solutions to specific use cases, ensuring high relevance and impact.
Innovative Problem Solving: Leverage advanced AI methodologies to tackle real-world business problems, providing creative and scalable AI-powered solutions that drive measurable results.
Data-Driven Insights: Conduct deep analysis of large datasets, uncovering insights and trends that guide decision-making, improve operational efficiencies, and fuel innovation.
Cross-Functional Collaboration: Work closely with Consulting, Engineering, and other teams to integrate AI solutions into broader business strategies, ensuring the seamless deployment of AI-powered applications.
Client Engagement: Collaborate with clients to understand their unique business needs, provide tailored AI solutions, and educate them on the potential of Generative AI to drive business transformation.
What do we expect?
Generative AI & NLP Expertise: Extensive experience in developing and deploying Generative AI applications and NLP frameworks, with hands-on knowledge of LLM fine-tuning, model customization, and AI-powered automation.
Hands-On Data Science Experience: 6+ years of experience in data science, with a proven ability to build and operationalize machine learning and NLP models in real-world environments.
AI Innovation: Deep knowledge of the latest developments in Generative AI and NLP, with a passion for experimenting with cutting-edge research and incorporating it into practical solutions.
Problem-Solving Mindset: Strong analytical skills and a solution-oriented approach to applying data science techniques to complex business problems.
Communication Skills: Exceptional ability to translate technical AI concepts into business insights and recommendations for non-technical stakeholders.
You are important to us, let’s stay connected!
Every individual comes with a different set of skills and qualities, so even if you don’t tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.
Additional Benefits: Health insurance (self & family), virtual wellness platform, and knowledge communities
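As an aside, the LoRA/PEFT fine-tuning mentioned above typically starts from a configuration like the following sketch; it assumes the transformers and peft libraries are installed, and the base model id is a placeholder.

```python
# Sketch of attaching a LoRA adapter to a causal LM with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension of the adapter
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# Training would then proceed with a standard Trainer / SFT loop on task-specific data.
```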
Posted 2 weeks ago
0.0 - 12.0 years
0 Lacs
Kolkata, West Bengal
On-site
Job Information
Date Opened: 06/03/2025
Job Type: Full time
Industry: Technology
City: Kolkata
State/Province: West Bengal
Country: India
Zip/Postal Code: 700091
Job Description
We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform.
Key Responsibilities
AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration.
Model Integration & Prompt Engineering: Leverage APIs from OpenAI, Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses.
Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate).
Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling.
Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n.
Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access.
Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features.
Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to improve AI quality continuously.
Requirements
Qualifications
Must-Haves:
5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects.
Strong background in NLP, dialog systems, or voice AI, preferably with production experience.
Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.).
Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow).
Experience deploying RAG pipelines, vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic.
Familiarity with voice processing (ASR, TTS, IVR design).
Solid understanding of API-based integration and microservices.
Deep care for data privacy, multi-tenancy security, and ethical AI practices.
Nice-to-Haves:
Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs.
Knowledge of sales pipelines and marketing automation tools.
Exposure to calendar integrations (Google Calendar API, Microsoft Graph).
Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic.
Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure).
Benefits
What We Offer
Founding team role with strong ownership and autonomy
Opportunity to shape the future of AI-powered sales
Flexible work environment
Competitive salary
Access to cutting-edge AI tools and training resources
Post your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com. Include a short note on your most interesting AI project or voicebot/conversational AI experience.
Experience: 5 – 12 years
Salary Range: 40 – 60 Lacs P.A.
Location: Kolkata, Sector V
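For context, ingesting a tenant-specific knowledge base into a vector store (as this role describes) often looks roughly like the sketch below; Chroma is used here purely as a stand-in for Pinecone or Weaviate, and the tenant name, chunk sizes, and documents are invented for illustration.

```python
# Sketch of a per-tenant ingestion step: chunk documents, then store them in a vector DB.
import chromadb

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

client = chromadb.Client()  # in-memory instance, for the sketch only
collection = client.create_collection("tenant_acme_faq")  # one collection per tenant

faq_text = "Q: How do I book a demo? A: Use the booking link on the pricing page."
chunks = chunk(faq_text)
collection.add(
    documents=chunks,
    ids=[f"faq-{i}" for i in range(len(chunks))],
    metadatas=[{"tenant": "acme", "source": "faq"}] * len(chunks),
)

# At query time, the agent retrieves tenant-scoped context for the RAG prompt.
results = collection.query(query_texts=["How can I book a demo?"], n_results=2)
print(results["documents"][0])
```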
Posted 2 weeks ago
3.0 years
0 Lacs
Delhi, India
On-site
Job Title: GenAI / ML Engineer
Function: Research & Development
Location: Delhi/Bangalore (3 days in office)
About the Company:
Elucidata is a TechBio company headquartered in San Francisco. Our mission is to make life sciences data AI-ready. Elucidata’s LLM-powered platform, Polly, helps research teams wrangle, store, manage and analyze large volumes of biomedical data. We are at the forefront of driving GenAI in life sciences R&D across leading BioPharma companies like Pfizer, Janssen, NextGen Jane and many more. We were recognised as the 'Most Innovative Biotech Company, 2024' by Fast Company. We are a 120+ multi-disciplinary team of experts based across the US and India. In September 2022, we raised $16 million in our Series A round led by Eight Roads, F-Prime, and our existing investors Hyperplane and IvyCap.
About the Role:
We are looking for a GenAI / ML Engineer to join our R&D team and work on cutting-edge applications of LLMs in biomedical data processing. In this role, you'll help build and scale intelligent systems that can extract, summarize, and reason over biomedical knowledge from large bodies of unstructured text, including scientific publications, EHR/EMR reports, and more. You’ll work closely with data scientists, biomedical domain experts, and product managers to design and implement reliable GenAI-powered workflows, from rapid prototypes to production-ready solutions. This is a highly strategic role as we continue to invest in agentic AI systems and LLM-native infrastructure to power the next generation of biomedical applications.
Key Responsibilities:
Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI.
Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for biomedical applications.
Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback.
Collaborate cross-functionally with data scientists, bioinformaticians, product teams, and curators to build impactful AI solutions.
Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods.
Qualifications:
2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction.
Strong Python programming skills and experience building production-ready codebases.
Hands-on experience with LLM frameworks and tooling (e.g., LangChain, HuggingFace, OpenAI APIs, Transformers).
Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices.
Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow.
Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment.
Good to Have (Preferred but Not Mandatory)
Experience working with biomedical or clinical text (e.g., PubMed, EHRs, trial data).
Exposure to building autonomous agents using CrewAI or LangGraph.
Understanding of knowledge graph construction and integration with LLMs.
Experience with evaluation challenges unique to GenAI workflows (e.g., hallucination detection, grounding, traceability).
Experience with fine-tuning, LoRA, PEFT, or using embeddings and vector stores for retrieval.
Working knowledge of cloud platforms (AWS/GCP) and MLOps tools (MLflow, Airflow, etc.).
Contributions to open-source LLM or NLP tooling.
We are proud to be an equal-opportunity workplace and are an affirmative action employer. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
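As a rough illustration of the prompt-based entity extraction described in this role, a sketch using the OpenAI Python client (>=1.0) might look like the following; the model name, prompt, and example abstract are assumptions, not Elucidata's actual pipeline.

```python
# Sketch of prompt-based entity extraction from biomedical text with an LLM API.
# Assumes OPENAI_API_KEY is set in the environment.
import json
from openai import OpenAI

client = OpenAI()
abstract = (
    "Metformin reduced HbA1c in patients with type 2 diabetes mellitus "
    "compared with placebo."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Extract drug and disease mentions as JSON "
                                      "with keys 'drugs' and 'diseases'. Return JSON only."},
        {"role": "user", "content": abstract},
    ],
    temperature=0,
)
# A production pipeline would validate and repair the JSON before parsing.
entities = json.loads(response.choices[0].message.content)
print(entities)  # e.g. {"drugs": ["Metformin"], "diseases": ["type 2 diabetes mellitus"]}
```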
Posted 2 weeks ago
0.0 - 1.0 years
0 - 0 Lacs
Chennai
Remote
Factana is a leading industrial AI solution provider with offices in Chennai and Chicago. We lead the new Industrial AI and IoT solutions era by partnering with AWS, Azure, and GCP for our global customers across various regions. We are looking for a passionate and hands-on Junior AI Engineer to join our growing AI team. The ideal candidate has experience building and deploying LLM-based applications using modern tools and platforms. You will work across the stack to bring cutting-edge AI systems into production, focusing on Retrieval-Augmented Generation (RAG), Agent Systems, scalable backend services, and cloud-native deployments.
Experience Required: 0 – 1 years of experience in Python development
Location: Chennai
Education Required: BE / B.Tech in Computer Engineering, Computer Science, or Artificial Intelligence
Role & Responsibilities:
· Develop and maintain backend APIs and microservices using FastAPI or Flask.
· Design and deploy LLM-based applications leveraging RAG architectures, Agent Systems, GraphRAG, and function/tool-calling workflows.
· Integrate with LLM APIs from OpenAI, Anthropic, and others.
· Work with vector databases like Pinecone and PGvector for intelligent retrieval in AI systems.
· Build and orchestrate AI pipelines using LangChain, LlamaIndex, and Autogen Studio.
· Leverage Azure AI Studio to train, deploy, and monitor AI models in production environments.
· Collaborate with cross-functional teams to build end-to-end AI features integrated with scalable cloud infrastructure.
Qualification Required:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
0 to 1 year of hands-on experience with Python API frameworks and SQL.
Strong coding skills in Python, with experience in FastAPI or Flask.
Exposure to LLM fine-tuning, prompt engineering, and deploying open-source models (Llama 2, Mistral, etc.).
Exposure to vector search and retrieval frameworks (LlamaIndex, LangChain).
Exposure to cloud platforms, especially Azure and Azure AI Studio.
Exposure to Autogen Studio for agentic workflow development and orchestration.
Knowledge of ML tools such as PyTorch, Transformers, and Hugging Face.
Experience with programming languages and runtimes such as JavaScript, Python, or Node.js.
Familiarity with testing practices.
Please note that we may not reply to all submissions due to the high volume of applications. Only qualified candidates will receive an email response and a call.
Send your resume to: jobs@factana.com
Job Types: Full-time, Fresher
Pay: ₹18,000.00 - ₹25,000.00 per month
Benefits:
Paid sick time
Paid time off
Work from home
Work Location: In person
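For illustration, a backend endpoint of the kind this role describes (FastAPI wrapping an LLM call) can be sketched as below; call_llm is a hypothetical placeholder for a real provider SDK or a locally hosted model.

```python
# Minimal FastAPI service exposing an LLM-backed endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    question: str

def call_llm(prompt: str) -> str:
    # Placeholder: swap in the real provider SDK (OpenAI, Anthropic) or a local model here.
    return f"(model answer to: {prompt})"

@app.post("/ask")
def ask(req: AskRequest) -> dict:
    answer = call_llm(req.question)
    return {"answer": answer}

# Run locally with: uvicorn app:app --reload
```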
Posted 2 weeks ago
6.0 - 8.0 years
20 - 30 Lacs
Thāne
On-site
Key Responsibilities:
Develop and Fine-Tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques.
Build RAG Pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant.
Implement and Orchestrate Agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications.
Deploy Models at Scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods.
Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers.
Ensure Security and Compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools.
Optimize and Monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation.
Work with AWS Services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions.
Contribute to Use Cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems.
Cross-functional Collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products.
Required Skills and Experience:
Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini).
Strong expertise in prompt engineering, chain-of-thought reasoning, and fine-tuning methods.
Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant).
Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks.
Model deployment skills using HuggingFace, vLLM, Ollama, and handling LoRA/QLoRA, PEFT, GGUF models.
Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch.
Strong coding ability in Python or similar programming languages.
Experience with MLOps/LLMOps for monitoring, evaluation, and cost management.
Familiarity with security standards: guardrails, PII protection, secure API interactions.
Use Case Delivery Experience: Proven record of delivering AI copilots, summarization engines, or enterprise GenAI applications.
Experience
6-8 years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment.
Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models.
Proficiency in designing scalable solutions using LangChain and vector databases.
Job Type: Full-time
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Benefits: Health insurance
Schedule: Monday to Friday
Work Location: In person
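As a side note, the QLoRA-style quantization this posting mentions is commonly set up as in the following sketch, assuming transformers with bitsandbytes on a CUDA GPU; the model id and prompt are illustrative.

```python
# Sketch of loading a causal LM in 4-bit (QLoRA-style quantization) with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model id
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_cfg,
    device_map="auto",  # place layers on the available GPU(s)
)

inputs = tokenizer("Summarize: our Q3 revenue grew 12%.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```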
Posted 2 weeks ago
8.0 years
0 Lacs
Uttar Pradesh, India
On-site
Job Description
Be part of the solution at Technip Energies and embark on a one-of-a-kind journey. You will be helping to develop cutting-edge solutions to solve real-world energy problems.
About us:
Technip Energies is a global technology and engineering powerhouse. With leadership positions in LNG, hydrogen, ethylene, sustainable chemistry, and CO2 management, we are contributing to the development of critical markets such as energy, energy derivatives, decarbonization, and circularity. Our complementary business segments, Technology, Products and Services (TPS) and Project Delivery, turn innovation into scalable and industrial reality. Through collaboration and excellence in execution, our 17,000+ employees across 34 countries are fully committed to bridging prosperity with sustainability for a world designed to last.
About the role:
We are currently seeking an AI Solution Architect to join our team based in Noida.
Key Responsibilities
Design and architect enterprise-grade AI solutions with emphasis on transformer architectures and generative AI systems
Develop and implement strategies for training, fine-tuning, and deploying open-source LLMs (Large Language Models)
Implement cost-efficient and low-latency architectures for LLM inference services
Build secure API frameworks for generative AI data transmission, processing, and reception
Design optimized pipelines for processing multimodal data including text, images, and video for vector embeddings
Lead technical discovery sessions with stakeholders to translate business requirements into AI solution designs
Create detailed technical specifications, reference architectures, and implementation roadmaps
Engineer scalable solutions capable of handling increased request volumes and data storage needs
Develop MVPs from proof-of-concepts, accelerating the AI product development lifecycle
Provide technical leadership for AI development teams using agile methodologies
About you:
8+ years of experience in software development with at least 5 years focused on AI/ML solutions
Extensive experience with transformer-based models (Anthropic, GPT, T5, LLaMA, Mistral) and generative AI technologies
Proven expertise in fine-tuning and deploying open-source LLMs for production environments
Deep knowledge of vector databases (Pinecone, Weaviate, Milvus, FAISS) and retrieval-augmented generation
Strong proficiency in the Azure AI ecosystem, including Azure OpenAI Service, Azure Machine Learning, and Azure Cognitive Services
Experience with LLM optimization techniques including quantization, distillation, and prompt engineering
Expertise in designing and implementing secure API frameworks with JWT, OAuth, and API gateways
Demonstrated ability to create low-latency, high-throughput AI systems using efficient orchestration
Hands-on experience with containerization (Docker), orchestration (Kubernetes), and microservices architectures
Proficiency in Python and AI frameworks such as PyTorch, TensorFlow, Hugging Face Transformers, and LangChain
Experience with MLOps practices and CI/CD pipelines for model deployment and monitoring
Strategic thinking to align AI solutions with broader business objectives and customer needs
Collaborative approach to problem-solving with adaptability to rapidly evolving technologies
Preferred Qualifications
Experience with multi-modal AI systems integrating vision and language capabilities
Knowledge of embedding models (CLIP, SBERT, Ada) and their applications
Expertise in RAG (Retrieval-Augmented Generation) architecture and implementations
Experience with Azure Kubernetes Service (AKS) for model deployment
Familiarity with vector search optimization and semantic caching strategies
Background in implementing AI guardrails and safety measures for generative AI systems
Experience with streaming inference and real-time AI processing
Knowledge of distributed training techniques and infrastructure
Expertise in GPU/TPU utilization optimization for AI workloads
Experience with enterprise data governance and compliance requirements for AI systems
Creative perspective for presenting AI strategies and roadmaps to stakeholders with illustrative flow diagrams & engaging content
Customer-focused mindset with emphasis on delivering tangible business outcomes
Intellectual curiosity and passion for staying current with emerging AI technologies, and for implementing PoC-level solutions to accelerate their adoption within the development team
Your career with us:
Working at Technip Energies is an inspiring journey, filled with groundbreaking projects and dynamic collaborations. Surrounded by diverse and talented individuals, you will feel welcomed, respected, and engaged. Enjoy a safe, caring environment where you can spark new ideas, reimagine the future, and lead change. As your career grows, you will benefit from learning opportunities at T.EN University, such as The Future Ready Program, and from the support of your manager through check-in moments like the Mid-Year Development Review, fostering continuous growth and development.
What’s next?
Once we receive your application, our Talent Acquisition professionals will screen and match your profile against the role requirements. We ask for your patience as the team works through the volume of applications within a reasonable timeframe. You can check your application progress periodically via the candidate profile you created when applying.
We invite you to get to know more about our company by visiting our website and following us on LinkedIn, Instagram, Facebook, X and YouTube for company updates.
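For reference, a call through Azure OpenAI Service (part of the Azure AI ecosystem listed above) generally follows the pattern sketched below; the endpoint, deployment name, and API version are placeholders for a real Azure setup.

```python
# Sketch of calling a deployed model through Azure OpenAI Service with the openai>=1.0 client.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # the Azure *deployment* name, not the base model id
    messages=[{"role": "user", "content": "Draft a one-line status update for the project."}],
)
print(response.choices[0].message.content)
```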
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Design, develop, and deploy NLP systems using advanced LLM architectures (e.g., GPT, BERT, LLaMA, Mistral) tailored for real-world applications such as chatbots, document summarization, Q&A systems, and more.
Implement and optimize RAG pipelines, combining LLMs with vector search engines (e.g., FAISS, Weaviate, Pinecone) to create context-aware, knowledge-grounded responses.
Integrate external knowledge sources, including databases, APIs, and document repositories, to enrich language models with real-time or domain-specific information.
Fine-tune and evaluate pre-trained LLMs, leveraging techniques like prompt engineering, LoRA, PEFT, and transfer learning to customize model behavior.
Collaborate with data engineers and MLOps teams to ensure scalable deployment and monitoring of AI services in cloud environments (e.g., AWS, GCP, Azure).
Build robust APIs and backend services to serve NLP/RAG models efficiently and securely.
Conduct rigorous performance evaluation and model validation, including accuracy, latency, bias/fairness, and explainability (XAI).
Stay current with advancements in AI research, particularly in generative AI, retrieval systems, prompt tuning, and hybrid modeling strategies.
Participate in code reviews, documentation, and cross-functional team planning to ensure clean and maintainable code.
Posted 2 weeks ago
2.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Company Description
Bytezera is a data services provider that specialises in AI and data solutions to help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation.
About the Role
We are seeking a highly skilled and hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in computer vision, STT, Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle, from data preparation and model training to deployment and optimization, with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases.
Key Responsibilities
Computer Vision Development
Design and implement advanced computer vision models for object detection, image segmentation, tracking, facial recognition, OCR, and video analysis.
Fine-tune and deploy vision models using frameworks like PyTorch, TensorFlow, OpenCV, Detectron2, YOLO, MMDetection, etc.
Optimize inference pipelines for real-time vision processing across edge devices, GPUs, or cloud-based systems.
Speech-to-Text (STT) System Development
Build and fine-tune ASR (Automatic Speech Recognition) models using toolkits such as Whisper, NVIDIA NeMo, DeepSpeech, Kaldi, or wav2vec 2.0.
Develop multilingual and domain-specific STT pipelines optimized for real-time transcription and high accuracy.
Integrate STT into downstream NLP pipelines or agentic systems for transcription, summarization, or intent recognition.
LLM and Agentic AI Design & Development
Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents.
Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications.
Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone.
Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks.
Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models.
Prepare and manage high-quality datasets for model training and evaluation.
Experience & Qualifications
2+ years of hands-on experience in AI engineering, machine learning, or data science roles.
Proven track record in building and deploying computer vision and STT AI applications.
Experience with agentic workflows or autonomous AI agents is highly desirable.
Technical Skills
Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy
LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models
Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents
Vector Databases: Pinecone or ChromaDB
DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3)
Core ML Skills: Data preprocessing, feature engineering, model evaluation, and optimization
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
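As an illustration of the STT work described above, offline transcription with the open-source Whisper package can be sketched as follows; the audio file path and model size are placeholders, and ffmpeg plus `pip install openai-whisper` are assumed.

```python
# Sketch of offline speech-to-text with the open-source Whisper package.
import whisper

model = whisper.load_model("small")          # trade accuracy vs. latency via model size
result = model.transcribe("sales_call.wav")  # runs ASR; language is auto-detected
print(result["text"])

# The transcript can then feed downstream NLP steps (summarization, intent detection, RAG).
```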
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Are you ready to be a catalyst in revolutionizing the software development landscape? At IgniteTech, we're spearheading the integration of cutting-edge GenAI technologies to transform how software is crafted and deployed. This isn't just about tweaking the old ways—it's about leading groundbreaking AI initiatives that redefine efficiency and competitiveness in the industry. This role is perfect for those who thrive at the intersection of technology and innovation. You're not here to merely perform tasks; you're here to strategize, research, prototype, and apply AI to create the next wave of software solutions. We seek visionaries who excel in AI and software architecture and are eager to harness technology for transformative business solutions. As an AI Engineer, you'll be instrumental in developing the frameworks and strategies that weave AI into our software solutions, directly influencing our deliverables' quality and productivity. Join us if you're passionate about advancing technology and making a significant impact in the software industry. What You Will Be Doing Craft the high-level architecture for AI-powered systems that automate complex software development tasks, including architecture generation, dependency management, and predictive coding. Stay at the forefront of AI and software engineering advancements by participating in R&D projects, attending industry events, and engaging with both academic and professional communities. Be among the first to discover and utilize emerging AI technologies. What You Won’t Be Doing Routine Code Maintenance: Engaging in the regular updating or debugging of existing software to address minor issues or enhance functionality. Standard Software Development: Performing basic coding or development tasks that involve implementing simple features without AI integration. Artificial Intelligence Engineer Key Responsibilities Leverage your engineering skills to significantly enhance development speed, minimize human error, and improve code quality, thus accelerating time-to-market for software products and boosting customer satisfaction. Basic Requirements Minimum of 3 years of experience as a software engineer driving impactful initiatives. Proficiency with GenAI code assistants (e.g., Github Copilot, Cursor.sh, v0.dev). Proven track record in successfully deploying Generative AI products, projects, solutions, or tools. Experience in utilizing various LLMs (e.g., GPT, Claude, Mistral) to address business challenges. About IgniteTech If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! 
Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5269-IN-Hyderaba-ArtificialInte.005
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform, from India, for the world.
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)
Company Overview:
Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.
Job Description:
We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical machine learning, deep learning, and Generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.
Key Responsibilities:
ML/DL Solution Development & Deployment:
Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code.
Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
Ability to model different marketing scenarios across a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk and churn) and under data limitations (sparse or incomplete labels, single-class learning).
Large-Scale Data Handling & Processing:
Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).
Generative AI & Large Language Models (LLMs):
Leverage in-depth understanding of transformer architectures and the principles of Large and Small Language Models.
Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data.
Apply foundational understanding of LLM Agents, multi-agent systems (e.g., Agent-Critique, ReACT, Agent Collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and Human-in-the-Loop systems.
Experimentation, Analysis & System Design:
Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements.
Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real-time.
Collaboration, Communication & Mentorship:
Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members.
Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions.
Work closely with cross-functional teams and clients to deliver impactful solutions.
Prototyping & Impact Measurement:
Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment.
Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:
Core Machine Learning & Deep Learning:
In-depth knowledge of Artificial Neural Networks (ANN), 1D, 2D, and 3D Convolutional Neural Networks (ConvNets), LSTMs, and Transformer models.
Expertise in modeling techniques such as promo mix modeling (MMM), PU Learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation.
Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches.
Advanced understanding and application of model explainability techniques.
Data Analysis & Processing:
Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis.
Ability to perform effective feature engineering by understanding business objectives.
ML/DL Frameworks & Tools:
Hands-on experience with ML/DL libraries such as Scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.
Natural Language Processing (NLP):
Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning.
Cloud & MLOps:
Experience with the AWS ML stack or equivalent cloud platforms.
Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Problem Solving & Research:
Strong logical and reasoning skills.
Good understanding of the Python ecosystem and experience implementing research papers.
Collaboration & Prototyping:
Ability to thrive in a fast-paced development and rapid prototyping environment.
Relevant to Have:
Expertise in claims data and a background in the pharmaceutical industry.
Awareness of best software design practices.
Understanding of backend frameworks like Flask.
Knowledge of recommender systems, representation learning, and PU learning.
Benefits and Perks:
Competitive ESOP grants.
Opportunity to work with Fortune 500 companies and world-class teams.
Support for publishing papers and attending academic/industry conferences.
Access to networking events, conferences, and seminars.
Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.
Appendix: Technical Skills (Must Haves)
Deep understanding of the following:
Data Processing
Wrangling: Some understanding of querying databases (MySQL, PostgreSQL, etc.); very fluent in the usage of libraries such as Pandas, NumPy, Statsmodels, etc.
Visualization: Exposure to Matplotlib, Plotly, Altair, etc.
Machine Learning Exposure
Machine learning fundamentals, e.g., PCA, correlations, statistical tests, etc.
Time series models, e.g., ARIMA, Prophet, etc.
Tree-based models, e.g., Random Forest, XGBoost, etc.
Deep learning models, e.g., understanding and experience of ConvNets, ResNets, UNets, etc.
GenAI-Based Models: Experience utilizing large-scale language models such as GPT-4 or other open-source alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom finetuning.
Code Versioning Systems: GitHub, Git
If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to:
📧 careers@cur8.in / vishnu.sethi@cur8.in
Subject Line: Application – ML Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform, from India, for the world.
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Role Overview
We're looking for a talented Machine Learning Engineer with 3+ years of experience to join our growing AI team. This role will play a central part in developing our real-time feedback engine, integrating and fine-tuning LLMs, and spearheading the training and deployment of custom and small language models (SLMs).
Key Responsibilities
Build and deploy scalable real-time inference systems using FastAPI and AWS.
Fine-tune and integrate large language models (LLMs) like Claude 3.5 Sonnet via Amazon Bedrock.
Lead or contribute to the training, fine-tuning, and evaluation of proprietary Small Language Models (SLMs).
Build ML pipelines for preprocessing multimodal data (audio, transcript, slides).
Collaborate with backend, design, and product teams to bring intelligent features into production.
Optimize models for speed, efficiency, and edge/cloud deployment.
Contribute to MLOps infrastructure for versioning, deployment, and monitoring of models.
Required Qualifications
3+ years of experience in applied ML, including model deployment and training.
Strong proficiency in Python, PyTorch, Transformers, and model training frameworks.
Experience deploying APIs using FastAPI and integrating models in production.
Experience fine-tuning LLMs (e.g., OpenAI, Claude, Mistral) and SLMs for specific downstream tasks.
Familiarity with AWS services (S3, Bedrock, Lambda, SageMaker).
Strong grasp of data pipelines, performance metrics, and model evaluation.
Comfort working with multimodal datasets (text, audio, visual).
Bonus Points
Experience with Whisper, TTS systems (e.g., Polly), or audio signal processing.
Background in building or fine-tuning SLMs for performance-constrained environments.
Familiarity with MLOps tooling (MLflow, Weights & Biases, DVC).
Experience with Redis, WebSockets, or streaming data systems.
Perks and Benefits
Attractive remuneration (competitive with market, based on experience and potential).
Fully remote work – we’re a truly distributed team.
Flexible working hours – we value output over clocking in.
Health & wellness perks, learning stipends, and regular team retreats.
Opportunity to work closely with passionate founders and contribute from Day 1.
Be a core part of a fast-growing, high-impact startup with a mission to transform education.
We look forward to receiving your application!
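For illustration, invoking an Anthropic model on Amazon Bedrock (as referenced above) with boto3 roughly follows this sketch; the region, model id, and request body are assumptions shown only to convey the call pattern.

```python
# Sketch of invoking an Anthropic model on Amazon Bedrock with boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is a placeholder

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "messages": [
        {"role": "user", "content": "Give one piece of feedback on this lecture summary: ..."}
    ],
}
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model id
    body=json.dumps(body),
)
payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```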
Posted 2 weeks ago
2.0 years
0 - 0 Lacs
India
On-site
About the Role
We are looking for an LLM (Large Language Model) Engineer to design, build, and optimize intelligent agents powered by Large Language Models (LLMs). You will work on cutting-edge AI applications: pre-train LLMs, fine-tune open-source models, integrate multi-agent systems, and deploy scalable solutions in production environments.
Key Responsibilities (Must Have)
Develop and fine-tune LLM-based models and AI agents for automation, reasoning, and decision-making.
Build multi-agent systems that coordinate tasks efficiently.
Design prompt engineering, retrieval-augmented generation (RAG), and memory architectures.
Optimize inference performance and reduce hallucinations in LLMs.
Integrate LLMs with APIs, databases, and external tools for real-world applications.
Implement reinforcement learning with human feedback (RLHF) and continual learning strategies.
Collaborate with research and engineering teams to enhance model capabilities.
Requirements
2+ years in AI/ML, with at least 2 years in LLMs or AI agents.
Strong experience in Python, LangChain, LlamaIndex, Autogen, Hugging Face, etc.
Experience with open-source LLMs (LLaMA, Mistral, Falcon, etc.).
Hands-on experience in LLM deployments with strong inference capabilities, using robust frameworks such as vLLM.
Experience building multi-modal RAG systems.
Knowledge of vector databases (FAISS, Chroma) for retrieval-based systems.
Experience with LLM fine-tuning, downscaling, prompt engineering, and model inference optimization.
Familiarity with multi-agent systems, cognitive architectures, or autonomous AI workflows.
Expertise in cloud platforms (AWS, GCP, Azure) and scalable AI deployments.
Strong problem-solving and debugging skills.
Nice to Have
Contributions to AI research, GitHub projects, or open-source communities.
Knowledge of Neural Symbolic AI, AutoGPT, BabyAGI, or similar frameworks.
Job Type: Full-time
Pay: ₹30,000.00 - ₹60,000.00 per month
Benefits: Provident Fund
Location Type: In-person
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9341725427
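For context on the vLLM requirement above, a minimal offline-inference sketch looks roughly like this; the model name and prompt are placeholders.

```python
# Minimal offline-inference sketch with vLLM, the serving framework named in the posting.
# Model choice and prompt are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")   # any HF-hosted open-weight model
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize the key risks in this contract clause: ..."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

In production, the same engine is usually run as an OpenAI-compatible server process so agents and RAG services can call it over HTTP instead of loading the model in-process.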
Posted 2 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
SAG Infotech Pvt. Ltd is looking for an Artificial Intelligence Engineer.
SAG Infotech Private Limited is a Jaipur-based IT development and service company that specializes in accounting software products and services. Founded in 1999, our organization is committed to providing high-quality accounting software and services for professionals like Chartered Accountants (CA), Company Secretaries (CS), HR Managers, and more. Our software, including Genius and Gen GST software, is being used by thousands of professional companies and individuals around the country. Apart from accounting software, we provide web and mobile development services to various other industries. Recently SAG Infotech has also come up with SAG RTA, Rajasthan's first Registrar and Share Transfer Agent and Category 1 RTA services provider in Jaipur.
SAG Infotech's product portfolio also includes the SDMT LCAP/LCNC Platform for website and app development. It is a cutting-edge platform utilizing Java, Angular, and practical IDEs, frameworks, and development tools. It offers a low-code/no-code approach, empowering users to visually build applications with minimal coding and enhancing efficiency for both technical and non-technical users.
Tasks
We are looking for an experienced Data Scientist / AI Developer with a strong foundation in classical machine learning, deep learning, natural language processing (NLP), and generative AI. You will be responsible for designing and implementing AI models, including fine-tuning large language models (LLMs), and developing innovative solutions to solve complex problems in a variety of domains.
Key Responsibilities:
Develop and implement machine learning models and deep learning algorithms for various use cases.
Work on NLP projects involving text classification, language modelling, entity recognition, and sentiment analysis.
Leverage generative AI techniques to create innovative solutions and models for content generation, summarization, and translation tasks.
Fine-tune large language models (LLMs) to optimize performance for specific tasks or applications.
Collaborate with cross-functional teams to design AI-driven solutions that address business problems.
Analyse large-scale datasets; perform data pre-processing, feature engineering, and model evaluation.
Stay updated with the latest advancements in AI, ML, NLP, and LLMs to continuously improve models and methodologies.
Present findings and insights to stakeholders in a clear and actionable manner.
Build and maintain end-to-end machine learning pipelines for scalable deployment.
Required Skills:
Strong expertise in supervised and unsupervised machine learning techniques.
Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
Solid experience in Natural Language Processing (NLP), including tokenization, embeddings, and sequence modelling.
Hands-on experience with generative AI models and their practical applications.
Proven ability to fine-tune large language models (LLMs) for specific tasks.
Strong programming skills in Python and familiarity with libraries like Scikit-learn, NumPy, and pandas.
Experience in handling large datasets and working with databases (SQL, NoSQL).
Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes).
Deep expertise in computer vision, including techniques for object detection, image segmentation, image classification, and feature extraction.
Strong problem-solving skills, analytical thinking, and attention to detail.
Preferred Skills:
Proven experience in fine-tuning LLMs (such as the Llama series or Mistral) for specific tasks and optimizing their performance.
Expertise in computer vision techniques, including object detection, image segmentation, and classification.
Proficiency with YOLO algorithms and other state-of-the-art computer vision models.
Hands-on experience in building and deploying models in real-time applications or production environments.
Qualifications:
5+ years of relevant experience in AI, ML, NLP, or related fields.
Bachelor's or Master's degree in Computer Science, Statistics, or a related discipline.
Location: Jaipur (WFO). Candidates preferably from Jaipur.
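For readers unfamiliar with how "fine-tuning LLMs for specific tasks" typically looks in practice, the sketch below shows a common parameter-efficient setup with Hugging Face PEFT and LoRA adapters. The model name, rank, and target modules are assumptions, not values prescribed by the posting.

```python
# Illustrative LoRA fine-tuning setup with Hugging Face PEFT for an open-weight LLM.
# Base model, rank, and target modules are assumed for the example.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typical for Llama/Mistral-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trained

# Training would then proceed with transformers.Trainer or trl's SFTTrainer on the prepared dataset.
```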
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
AI Agent Development - Python (CrewAI + LangChain)
Location: Noida / Gwalior (On-site)
Experience Required: Minimum 3+ years
Employment Type: Full-time
🚀 About the Role
We're seeking an AI Agent Developer (Python) with hands-on experience in CrewAI and LangChain to join our cutting-edge AI product engineering team. If you thrive at the intersection of LLMs, agentic workflows, and autonomous tooling, this is your opportunity to build real-world AI agents that solve complex problems at scale.
You'll be responsible for designing, building, and deploying intelligent agents that leverage prompt engineering, memory systems, vector databases, and multi-step tool execution strategies.
🧠 Core Responsibilities
Design and develop modular, asynchronous Python applications using clean code principles.
Build and orchestrate intelligent agents using CrewAI: defining agents, tasks, memory, and crew dynamics.
Develop custom chains and tools using LangChain (LLMChain, AgentExecutor, memory, structured tools).
Implement prompt engineering techniques like ReAct, Few-Shot, and Chain-of-Thought reasoning.
Integrate with APIs from OpenAI, Anthropic, HuggingFace, or Mistral for advanced LLM capabilities.
Use semantic search and vector stores (FAISS, Chroma, Pinecone, etc.) to build RAG pipelines.
Extend tool capabilities: web scraping, PDF/document parsing, API integrations, and file handling.
Implement memory systems for persistent, contextual agent behavior.
Leverage DSA and algorithmic skills to structure efficient reasoning and execution logic.
Deploy containerized applications using Docker, Git, and modern Python packaging tools.
🛠️ Must-Have Skills
Python 3.x (async, OOP, type hinting, modular design)
CrewAI (Agent, Task, Crew, memory, orchestration) – must have
LangChain (LLMChain, tools, AgentExecutor, memory)
Prompt engineering (Few-Shot, ReAct, dynamic templates)
LLMs & APIs (OpenAI, HuggingFace, Anthropic)
Vector stores (FAISS, Chroma, Pinecone, Weaviate)
Retrieval-Augmented Generation (RAG) pipelines
Memory systems: BufferMemory, ConversationBuffer, VectorStoreMemory
Asynchronous programming (asyncio, LangChain hooks)
DSA / algorithms (graphs, queues, recursion, time/space optimization)
💡 Bonus Skills
Experience with machine learning libraries (Scikit-learn, XGBoost, TensorFlow basics)
Familiarity with NLP concepts (embeddings, tokenization, similarity scoring)
DevOps familiarity (Docker, GitHub Actions, Pipenv/Poetry)
🧭 Why Join Us?
Work on cutting-edge LLM agent architecture with real-world impact.
Be part of a fast-paced, experiment-driven AI team.
Collaborate with passionate developers and AI researchers.
Opportunity to build from scratch and influence core product design.
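As a hedged sketch of the CrewAI pattern this role centres on (agents, tasks, and a crew that runs them), roles, goals, and task text below are invented purely for illustration.

```python
# Minimal CrewAI sketch: two agents, two tasks, run as a crew.
# All roles, goals, and descriptions are placeholders.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Market Researcher",
    goal="Collect recent facts about a topic",
    backstory="A meticulous analyst who cites sources.",
)
writer = Agent(
    role="Report Writer",
    goal="Turn research notes into a short brief",
    backstory="A concise technical writer.",
)

research = Task(description="Research the topic: {topic}",
                expected_output="Bullet-point notes", agent=researcher)
draft = Task(description="Write a 5-sentence brief from the notes",
             expected_output="A short brief", agent=writer)

crew = Crew(agents=[researcher, writer], tasks=[research, draft])
print(crew.kickoff(inputs={"topic": "vector databases"}))  # {topic} is interpolated into task text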
Posted 2 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Contract Role: Data Scientist (Remote, Evening Shift)
Duration: 2 Months
Shift: 5 PM – 9 PM IST (Remote only)
Experience: 8–10 years
Start: Immediate
Eligibility: Only candidates currently working remotely
Key Skills Required:
- LLMs & GenAI: GPT, Claude, Mistral, prompt engineering, fine-tuning
- RAG: Vector DBs (FAISS, Pinecone), retrieval pipelines
- Agentic AI: LangChain Agents, AutoGPT, multi-agent systems
- LLMOps: Deployment (FastAPI, Docker), monitoring (Langfuse, PromptLayer)
- Vendor Tools: OpenAI, Anthropic, Google, AWS, Azure
- Databases: SQL, NoSQL, knowledge bases
- Solutioning & Deployment: End-to-end delivery, API integration
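For the RAG and vector-DB skills listed above, the core retrieval step can be sketched with FAISS and a sentence-embedding model; the corpus, query, and model name here are placeholders.

```python
# Minimal retrieval sketch: embed documents, index them in FAISS, fetch nearest chunks for a query.
# Documents, query, and embedding model are illustrative only.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders ship within 2 business days.",
    "Support hours: 9am-6pm IST on weekdays.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])        # inner product == cosine after normalisation
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = encoder.encode(["How long do refunds take?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
retrieved = [docs[i] for i in ids[0]]                # would be passed to the LLM as grounding context
print(retrieved)
```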
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
India
Remote
Prime is a cutting-edge Edtech startup focused on building intelligent, autonomous AI agents that collaborate in multi-agent systems. We create agent-based architectures that enable autonomous decision-making and seamless cooperation to solve complex problems. Join us to help pioneer the future of decentralized AI!
We are a fast-growing Edtech company driven by innovation, collaboration, and adaptability. Our mission is to deliver cutting-edge solutions that align with market demands and technical feasibility.
Role Overview
As a Multi-Agent Systems Architect at Prime Corporate, you will design and develop multi-agent architectures that empower AI agents to work together autonomously. You will be responsible for creating scalable, robust systems that enable agents to communicate, negotiate, and collaborate effectively, driving innovation in AI-driven automation.
Key Responsibilities
• Design and implement multi-agent system architectures that enable autonomous decision-making and collaboration among AI agents.
• Develop agent-based frameworks that support task allocation, communication protocols, and coordination strategies.
• Build and optimize agent communication layers using APIs, vector databases, and messaging protocols.
• Integrate large language models (LLMs) and other AI components into agent workflows to enhance capabilities.
• Work directly with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, etc.).
• Collaborate closely with product, engineering, and research teams to translate business requirements into technical solutions.
• Ensure scalability, reliability, and fault tolerance of multi-agent systems in production environments.
• Continuously research and apply the latest advances in multi-agent systems, decentralized AI, and autonomous agents.
• Document architecture designs, workflows, and implementation details clearly for team collaboration and future reference.
What We're Looking For
• Practical experience designing and building multi-agent systems or agent-based architectures.
• Proficiency in Python and familiarity with AI/ML frameworks (e.g., LangChain, AutoGen, HuggingFace).
• Understanding of decentralized control, agent communication protocols, and emergent system design.
• Experience with cloud platforms (AWS, GCP, Azure) and API integrations.
• Strong problem-solving skills and ability to work independently in a remote startup environment.
• No formal degree required - your skills, projects, and passion matter most.
Location: 100% Remote
Experience: 0-1 year
Compensation Structure
This role follows a structured pathway designed to prepare candidates for the responsibilities of a full-time position.
• Pre-Qualification Internship (Mandatory): Duration: 2 months; Stipend: ₹5,000/month; Objective: to evaluate foundational skills, work ethic, and cultural fit within the organization.
• Internship (Mandatory): Duration: 4 months; Stipend: ₹5,000–₹15,000/month (based on performance during the pre-qualification internship).
Why Join Prime Corporate?
• Work remotely with a passionate, innovative startup.
• Contribute to pioneering multi-agent AI systems shaping the future of autonomous technology.
• Grow your career from internship to full-time with competitive pay and equity opportunities.
• Career Growth: Prove your potential and secure a full-time role with competitive compensation.
Note: This is not a direct full-time job opportunity. Candidates must commit to our mandatory two-stage internship process.
If you're genuinely interested in joining us, we'd love to hear from you! Ready to build the future of autonomous AI? Apply now and join Prime Corporate's mission!
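For candidates new to "working directly with LLM APIs", the call shape is broadly similar across providers; a minimal OpenAI example is sketched below (Anthropic, Mistral, and Cohere expose comparable chat endpoints). The model name and prompts are illustrative, and the client reads OPENAI_API_KEY from the environment.

```python
# Minimal direct LLM API call (OpenAI shown); model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model choice
    messages=[
        {"role": "system", "content": "You are a planning agent that breaks goals into steps."},
        {"role": "user", "content": "Plan the steps to summarise a 1-hour lecture recording."},
    ],
    temperature=0.3,
)
print(response.choices[0].message.content)
```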
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
As an AI Intern (AI Agent Developer Intern), you'll work at the cutting edge of AI automation and agent orchestration. This isn't a research role: you'll be building production-grade AI agents that solve real-world problems across domains like operations, education, sales, and personal productivity. You'll work closely with our leadership team to prototype, iterate, and ship fast, owning your work from start to finish.
Key Responsibilities
Design and develop AI agents using LangChain, CrewAI, LangGraph, AutoGen (A2A), ADK, and similar platforms.
Build and orchestrate agent workflows using tools like n8n, Airflow, or LangGraph DAGs.
Integrate APIs, databases, vector stores (e.g., Pinecone, Chroma, Weaviate), and tools like OpenAI, Claude, Mistral, HuggingFace, etc.
Write clean, modular, and testable code in Python (TypeScript is a plus).
Rapidly prototype agentic workflows, test edge cases, and iterate based on feedback.
Collaborate with design and product teams to shape user-centric workflows.
Ship minimal, usable versions and continuously improve them based on usage analytics.
Maintain documentation and contribute to internal knowledge bases.
Requirements
Strong hands-on coding skills, especially in Python.
Previous experience building with LangChain, LangGraph, CrewAI, or similar frameworks.
Familiarity with agent orchestration, tool chaining, and multi-agent systems.
Quick learner, adaptable to new frameworks and libraries.
Strong problem-solving skills and autonomy; able to take vague ideas to shipped features.
Comfortable working in a fast-paced, iterative startup environment.
Basic understanding of prompt engineering, LLM APIs, and memory management.
Exposure to n8n, Supabase, Firebase, PostgreSQL, Redis, or similar stacks.
Familiarity with version control (Git), basic CI/CD, and cloud platforms (Vercel, AWS, etc.).
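To illustrate the LangGraph-style orchestration mentioned above, here is a rough two-node workflow sketch; the node logic is stubbed and the state keys are invented for the example.

```python
# Rough sketch of a two-step agent workflow as a LangGraph state graph.
# Node bodies are stand-ins for real LLM or tool calls.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    draft: str

def research(state: State) -> dict:
    return {"draft": f"notes about: {state['question']}"}   # stand-in for a retrieval/tool call

def answer(state: State) -> dict:
    return {"draft": state["draft"] + " -> final answer"}   # stand-in for an LLM call

graph = StateGraph(State)
graph.add_node("research", research)
graph.add_node("answer", answer)
graph.set_entry_point("research")
graph.add_edge("research", "answer")
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"question": "What is RAG?", "draft": ""}))
```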
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Gen AI Architect – Retail Technology Office
About The Role
We are seeking a visionary and hands-on Gen AI Architect to lead the design, development, and deployment of cutting-edge Generative AI and Agentic AI solutions in the Retail domain. This role is pivotal in shaping our AI strategy, building innovative offerings, and driving client success through impactful PoCs and solutions. The architect will work closely with the CTO and the Retail strategy office to define the AI roadmap, lead a team of engineers and architects, and represent the company in client engagements and industry forums.
Key Responsibilities
Architect and lead the development of GenAI and Agentic AI solutions tailored for retail use cases (e.g., personalized shopping, intelligent agents, supply chain optimization).
Collaborate with the CTO to define AI strategy, solution blueprints, and innovation roadmaps.
Design and build scalable PoCs, MVPs, and production-grade solutions using state-of-the-art GenAI tools and frameworks.
Lead and mentor a team of engineers and architects, fostering a culture of innovation, quality, and continuous learning.
Engage with clients to present PoVs, conduct workshops, and articulate the value of GenAI solutions with compelling storytelling and technical depth.
Ensure engineering excellence by applying best practices in TDD, BDD, DevSecOps, and microservices architecture.
Required Skills & Experience
AI/ML & GenAI Expertise – proven hands-on experience with:
LLMs (e.g., OpenAI, Claude, Mistral, LLaMA, Gemini)
GenAI frameworks: LangChain, LlamaIndex, Haystack, Semantic Kernel
Agentic AI tools: Google Agentspace, ADK, LangGraph, AutoGen, CrewAI, MetaGPT, AutoGPT, OpenAgents
Vector databases: Vertex AI, FAISS, Weaviate, Pinecone, Chroma
Prompt engineering, RAG pipelines, fine-tuning, and orchestration
Engineering & Architecture – strong background in:
Java Spring Boot, REST APIs, Microservices
Cloud platforms: AWS, Azure, or GCP
CI/CD, DevSecOps, TDD/BDD
Containerization (Docker, Kubernetes)
Leadership & Communication
Exceptional storytelling and articulation skills to convey complex AI concepts to technical and non-technical audiences.
Experience in client-facing roles, including workshops, demos, and executive presentations.
Ability to lead cross-functional teams and drive innovation in a fast-paced environment.
Preferred Qualifications
Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
Certifications in AI/ML and Cloud (GCP or Azure).
Experience in the Retail domain or consumer-facing industries is a strong plus.
Why Join Us?
Be at the forefront of AI innovation in Retail.
Work with a visionary leadership team.
Build solutions that impact Cognizant's Retail clients.
Enjoy a collaborative, inclusive, and growth-oriented culture.
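As a small illustration of the prompt-engineering and RAG building blocks this role covers, the sketch below shows a grounded prompt chain using LangChain's runnable syntax; model name, prompt wording, and the hard-coded context are assumptions for illustration.

```python
# Hedged sketch of a context-grounded retail prompt chain with LangChain runnables.
# In a full RAG pipeline, {context} would come from a vector-store retriever rather than a literal string.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a retail assistant. Answer only from the provided context."),
    ("human", "Context:\n{context}\n\nQuestion: {question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # assumed model choice

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"context": "Store hours: 10am-9pm daily.",
                    "question": "When do you close?"}))
```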
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-205390 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Feb. 13, 2025 CATEGORY: Information Systems
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
What you will do
Let's do this. Let's change the world. In this vital role you will be at the forefront of innovation, using your skills to design and implement pioneering AI/Gen AI solutions. With an emphasis on creativity, collaboration, and technical excellence, this role provides a unique opportunity to work on ground-breaking projects that enhance operational efficiency at the Amgen Technology and Innovation Centre while ensuring the protection of critical systems and data.
Roles & Responsibilities:
Design, develop, and deploy Gen AI solutions using advanced LLMs such as the OpenAI API and open-source LLMs (Llama 2, Mistral, Mixtral), and frameworks like LangChain and Haystack.
Design and implement AI & GenAI solutions that drive productivity across all roles in the software development lifecycle.
Demonstrate the ability to rapidly learn the latest technologies and develop a vision to embed the solution to improve operational efficiency within a product team.
Collaborate with multi-functional teams (product, engineering, design) to set project goals, identify use cases, and ensure seamless integration of Gen AI solutions into current workflows.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree and 1 to 3 years of experience with programming languages such as Java and Python OR
Bachelor's degree and 3 to 5 years of experience with programming languages such as Java and Python OR
Diploma and 7 to 9 years of experience with programming languages such as Java and Python
Preferred Qualifications:
Proficiency in programming languages such as Python and Java.
Leverage advanced knowledge of the Python open-source software stack, such as Django or Flask, Django REST or FastAPI, etc.
Experience working with RAG technologies and LLM frameworks, LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases.
Familiarity with cloud security (AWS/Azure/GCP).
Utilize expertise in integrating and demonstrating Gen AI LLMs to maximize operational efficiency.
Good-to-Have Skills:
Experience with graph databases (Neo4j and Cypher would be a big plus).
Experience with prompt engineering and familiarity with frameworks such as DSPy would be a big plus.
Professional Certifications: AWS / GCP / Databricks
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for our teammates' professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Thrive for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 2 weeks ago
12.0 - 14.0 years
2 - 7 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-209745 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Mar. 28, 2025 CATEGORY: Information Systems
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Associate Director - AI/ML Engineering
What you will do
Let's do this. Let's change the world. We are seeking an Associate Director of ML/AI Engineering to lead Amgen India's AI engineering practice. This role is integral to developing top-tier talent, setting ML/AI best practices, and evangelizing ML/AI engineering capabilities across the organization. The Associate Director will be responsible for driving the successful delivery of strategic business initiatives by overseeing the technical architecture, managing talent, and establishing a culture of excellence in ML/AI.
The key aspects of this role involve: (1) prior hands-on experience building ML and AI solutions, (2) management experience in leading an ML/AI engineering team and developing talent, and (3) delivering AI initiatives at enterprise scale.
Roles & Responsibilities:
Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship.
Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities.
Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives and demonstrate leadership by contributing to strategic discussions.
Create and implement a strategy for expanding the AI/ML engineering team, including recruitment, onboarding, and talent development.
Oversee the end-to-end lifecycle of AI/ML projects, from concept and design through to deployment and optimization, ensuring timely and successful delivery.
Ensure adoption of MLOps best practices, including model versioning, testing, deployment, and monitoring.
Collaborate with multi-functional teams, including product, data science, and software engineering, to identify opportunities and deliver AI/ML solutions that drive business value.
Serve as an AI/ML evangelist across the organization, promoting awareness and understanding of the capabilities and value of AI/ML technologies.
Promote a culture of innovation and continuous learning within the team, encouraging the exploration of new tools, technologies, and methodologies.
Provide technical leadership and mentorship, guiding engineers in implementing scalable and robust AI/ML systems.
Work closely with collaborators to prioritize AI/ML projects and ensure timely delivery of key initiatives.
Lead innovation initiatives to explore new AI/ML technologies, platforms, and tools that can drive further advancements in the organization's AI capabilities.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree and 12 to 14 years of experience in computer science, artificial intelligence, or machine learning OR
Bachelor's degree and 14 to 18 years of experience in computer science, artificial intelligence, or machine learning OR
Diploma and 18 to 20 years of experience in computer science, artificial intelligence, or machine learning
Preferred Qualifications:
Experience in building AI platforms and applications at enterprise scale.
Expertise in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, etc.
Hands-on experience with LLMs, Generative AI, and NLP (e.g., GPT, BERT, Llama, Claude, Mistral AI).
Strong understanding of MLOps processes and tools such as MLflow, Kubeflow, or similar platforms.
Proficient in programming languages such as Python, R, or Scala.
Experience deploying AI/ML models in cloud environments (AWS, Azure, or Google Cloud).
Proven track record of managing and delivering AI/ML projects at scale.
Excellent project management skills, with the ability to lead multi-functional teams and manage multiple priorities.
Experience in regulated industries, preferably life sciences and pharma.
Good-to-Have Skills:
Experience with natural language processing, computer vision, or reinforcement learning.
Knowledge of data governance, privacy regulations, and ethical AI considerations.
Experience with cloud-native AI/ML services (Databricks, AWS, Azure ML, Google AI Platform).
Experience with AI observability.
Professional Certifications (Preferred): Google Professional Machine Learning Engineer, AWS Certified Machine Learning, Azure AI Engineer Associate, or Databricks Certified Generative AI Engineer Associate.
Soft Skills:
Excellent leadership and communication skills, with the ability to convey complex technical concepts to non-technical collaborators.
Ability to foster a collaborative and innovative work environment.
Strong problem-solving abilities and attention to detail.
High degree of initiative and self-motivation.
Ability to mentor and develop team members, promoting their growth and success.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
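For readers unfamiliar with the MLOps practices this role oversees (model versioning, testing, and monitoring), a minimal MLflow tracking sketch is shown below; the dataset, parameters, and run name are placeholders.

```python
# Small MLflow sketch: log parameters, metrics, and a trained model as a versioned, tracked run.
# The synthetic dataset and hyperparameters are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")   # versioned artifact, ready for registry promotion
```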
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GenAI Architect – Retail Technology Office
Location: Chennai or Bangalore
About The Role
We are seeking a visionary and hands-on GenAI Architect to lead the design, development, and deployment of cutting-edge Generative AI and Agentic AI solutions in the Retail domain. This role is pivotal in shaping our AI strategy, building innovative offerings, and driving client success through impactful PoCs and solutions. The architect will work closely with the CTO and the Retail strategy office to define the AI roadmap, lead a team of engineers and architects, and represent the company in client engagements and industry forums.
Key Responsibilities
Architect and lead the development of GenAI and Agentic AI solutions tailored for retail use cases (e.g., personalized shopping, intelligent agents, supply chain optimization).
Collaborate with the CTO to define AI strategy, solution blueprints, and innovation roadmaps.
Design and build scalable PoCs, MVPs, and production-grade solutions using state-of-the-art GenAI tools and frameworks.
Lead and mentor a team of engineers and architects, fostering a culture of innovation, quality, and continuous learning.
Engage with clients to present PoVs, conduct workshops, and articulate the value of GenAI solutions with compelling storytelling and technical depth.
Ensure engineering excellence by applying best practices in TDD, BDD, DevSecOps, and microservices architecture.
Required Skills & Experience
AI/ML & GenAI Expertise – proven hands-on experience with:
LLMs (e.g., OpenAI, Claude, Mistral, LLaMA, Gemini)
GenAI frameworks: LangChain, LlamaIndex, Haystack, Semantic Kernel
Agentic AI tools: Google Agentspace, ADK, LangGraph, AutoGen, CrewAI, MetaGPT, AutoGPT, OpenAgents
Vector databases: Vertex AI, FAISS, Weaviate, Pinecone, Chroma
Prompt engineering, RAG pipelines, fine-tuning, and orchestration
Engineering & Architecture – strong background in:
Java Spring Boot, REST APIs, Microservices
Cloud platforms: AWS, Azure, or GCP
CI/CD, DevSecOps, TDD/BDD
Containerization (Docker, Kubernetes)
Leadership & Communication
Exceptional storytelling and articulation skills to convey complex AI concepts to technical and non-technical audiences.
Experience in client-facing roles, including workshops, demos, and executive presentations.
Ability to lead cross-functional teams and drive innovation in a fast-paced environment.
Preferred Qualifications
Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
Certifications in AI/ML and Cloud (GCP or Azure).
Experience in the Retail domain or consumer-facing industries is a strong plus.
Why Join Us?
Be at the forefront of AI innovation in Retail.
Work with a visionary leadership team.
Build solutions that impact Cognizant's Retail clients.
Enjoy a collaborative, inclusive, and growth-oriented culture.
Posted 2 weeks ago
8.0 years
0 Lacs
Noida
On-site
Position: AI/ML Lead (CE80SF RM 3261)
Good-to-Have Skills
Knowledge and experience in building knowledge graphs in production.
Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Technical Skills Required
Solid experience in time series analysis, anomaly detection, and traditional machine learning techniques such as regression, classification, predictive modeling, clustering, and deep learning using the Python stack.
Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns.
Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), Glue crawler, ETL.
Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
Experience with version control systems (e.g., Git, CodeCommit).
Strong knowledge and experience in Generative AI / LLM-based development.
Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.).
Expertise in implementing RAG systems that combine knowledge bases with generative AI models.
Knowledge of training and fine-tuning foundation models (Anthropic, Claude, Mistral, etc.), including multimodal inputs and outputs.
Roles and Responsibilities
Develop and implement machine learning models and algorithms.
Work closely with project stakeholders to understand requirements and translate them into deliverables.
Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
Stay updated with the latest advancements in AI/ML technologies and methodologies.
Collaborate with cross-functional teams to support various AI/ML initiatives.
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Noida
Experience: 8+ years
Notice period: 0-15 days
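For the anomaly-detection skill listed above, a minimal scikit-learn sketch is shown below; the synthetic series and contamination rate are placeholders for real metric or sensor data.

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest.
# The synthetic series and injected outliers are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
values = rng.normal(loc=100, scale=5, size=500)
values[[50, 200, 350]] = [160, 30, 155]               # injected anomalies

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(values.reshape(-1, 1))  # -1 marks anomalies
print("anomalous indices:", np.where(labels == -1)[0])
```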
Posted 2 weeks ago