6.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
We're Hiring: Artificial Intelligence Consultant! We're seeking a highly motivated and technically adept Artificial Intelligence Consultant to join our growing Artificial Intelligence and Business Transformation practice. This role is ideal for a strategic thinker with a strong blend of leadership, business consulting acumen, and technical expertise in Python, LLMs, Retrieval-Augmented Generation (RAG), and agentic systems. Experience Required: Minimum 6+ Years Location: Remote / Work From Home Job Type: Contract to hire (1-year renewable contract) Notice Period: Immediate to 15 Days Max Mode of Interview: Virtual Roles And Responsibilities AI Engagements: Independently manage end-to-end delivery of AI-led transformation projects across industries, ensuring value realization and high client satisfaction. Strategic Consulting & Roadmapping: Identify key enterprise challenges and translate them into AI solution opportunities, crafting transformation roadmaps that leverage RAG, LLMs, and intelligent agent frameworks. LLM/RAG Solution Design & Implementation: Architect and deliver cutting-edge AI systems using Python, LangChain, LlamaIndex, OpenAI function calling, semantic search, and vector store integrations (FAISS, Qdrant, Pinecone, ChromaDB). Agentic Systems: Design and deploy multi-step agent workflows using frameworks like CrewAI, LangGraph, AutoGen, or ReAct, optimizing tool-augmented reasoning pipelines. Client Engagement & Advisory: Build lasting client relationships as a trusted AI advisor, delivering technical insight and strategic direction on generative AI initiatives. Hands-on Prototyping: Rapidly prototype PoCs using Python and modern ML/LLM stacks to demonstrate feasibility and business impact. Thought Leadership: Conduct market research, stay updated with the latest in GenAI and RAG/agentic systems, and contribute to whitepapers, blogs, and new offerings. Essential Skills Education: Bachelor's or Master's in Computer Science, AI, Engineering, or a related field. Experience: Minimum 6 years of experience in consulting or technology roles, with at least 3 years focused on AI & ML solutions. Leadership Quality: Proven track record in leading cross-functional teams and delivering enterprise-grade AI projects with tangible business impact. Business Consulting Mindset: Strong problem-solving, stakeholder communication, and business analysis skills to bridge technical and business domains. Python & AI Proficiency: Advanced proficiency in Python and popular AI/ML libraries (e.g., scikit-learn, PyTorch, TensorFlow, spaCy, NLTK). Solid understanding of NLP, embeddings, semantic search, and transformer models. LLM Ecosystem Fluency: Experience with OpenAI, Cohere, Hugging Face models; prompt engineering; tool/function calling; and structured task orchestration. Independent Contributor: Ability to own initiatives end-to-end, take decisions independently, and operate in fast-paced environments. Preferred Skills Cloud Platform Expertise: Strong familiarity with Microsoft Azure (preferred), AWS, or GCP, including compute instances, storage, managed services, and serverless/cloud-native deployment models. Programming Paradigms: Hands-on experience with both functional and object-oriented programming in AI system design. Hugging Face Ecosystem: Proficiency in using Hugging Face Transformers, Datasets, and the Model Hub. Vector Store Experience: Hands-on experience with FAISS, Qdrant, Pinecone, ChromaDB.
LangChain Expertise: Strong proficiency in LangChain for agentic task orchestration and RAG pipelines. MLOps & Deployment: CI/CD for ML pipelines, MLOps tools (MLflow, Azure ML), containerization (Docker/Kubernetes). Cloud & Service Architecture: Knowledge of microservices, scaling strategies, and inter-service communication. Programming Languages: Proficiency in Python and C# for enterprise-grade AI solution development.
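As an illustration of the RAG work this role describes (retrieval over a vector store grounding an LLM answer), here is a minimal, hedged sketch. The embedding model, sample documents, FAISS index choice, and OpenAI model name are assumptions for demonstration, not the employer's actual stack.

```python
# Minimal RAG sketch: embed documents, retrieve by similarity, ground the LLM answer.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

docs = [
    "Policy A covers remote-work equipment reimbursement.",
    "Policy B covers international travel insurance.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")        # assumed embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])              # inner product on unit vectors = cosine similarity
index.add(np.asarray(doc_vecs, dtype="float32"))

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def answer(question: str, k: int = 2) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q_vec, dtype="float32"), k)
    context = "\n".join(docs[i] for i in ids[0])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer strictly from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Which policy covers remote work?"))
```

The same pattern maps onto LangChain or LlamaIndex retrievers and managed vector stores such as Qdrant, Pinecone, or ChromaDB; only the index and client wrappers change.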
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Responsibilities Design and fine-tune LLMs (Large Language Models) for BFSI use-cases: intelligent document processing, report generation, chatbots, advisory tools. Evaluate and apply prompt engineering, retrieval-augmented generation (RAG), and fine-tuning methods. Implement safeguards, red-teaming, and audit mechanisms for LLM usage in BFSI. Work with data privacy, legal, and compliance teams to align GenAI outputs with industry regulations. Collaborate with enterprise architects to integrate GenAI into existing digital platforms. Qualifications 3–5 years in AI/ML; 1–3 years hands-on in GenAI/LLM-based solutions. BFSI-specific experience in document processing, regulatory reporting, or virtual agents using GenAI is highly preferred. Exposure to prompt safety, model alignment, and RAG pipelines is critical. Essential Skills Tech Stack LLMs: GPT (OpenAI), Claude, LLaMA, Mistral, Falcon Tools: LangChain, LlamaIndex, Pinecone, Weaviate Frameworks: Transformers (Hugging Face), PEFT, DeepSpeed APIs: OpenAI, Cohere, Anthropic, Azure OpenAI Cloud: GCP GenAI Studio, GCP Vertex AI Others: Prompt engineering, RAG, vector databases, role-based guardrails Experience 3–5 years in AI/ML; 1–3 years hands-on in GenAI/LLM-based solutions.
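The safeguard and output-sanitization duties above are the kind of thing a lightweight guardrail layer handles before an LLM reply reaches a BFSI user. The sketch below is illustrative only; the redaction patterns and blocked-topic policy are assumptions, not a compliance-grade control.

```python
# Illustrative output-sanitization guardrail for BFSI-style LLM responses.
import re

PII_PATTERNS = {
    "PAN": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),   # Indian PAN format
    "ACCOUNT": re.compile(r"\b\d{9,18}\b"),             # bank-account-like numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

BLOCKED_TOPICS = ("guaranteed returns", "insider information")  # hypothetical policy

def sanitize(llm_output: str) -> str:
    """Redact PII and block disallowed advice before the reply reaches a user."""
    text = llm_output
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    if any(topic in text.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that request. Please contact a licensed advisor."
    return text

print(sanitize("Send the statement for account 123456789012 to a.kumar@example.com"))
```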
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are looking for a DevOps Technical Lead to play a crucial role in leading the development of an Infrastructure Agent powered by Generative AI (GenAI) technology. In this role, you will be responsible for designing and implementing an intelligent Infra Agent that can handle provisioning, configuration, observability, and self-healing autonomously. Your key responsibilities will include leading the architecture and design of the Infra Agent, integrating various automation frameworks to enhance DevOps workflows, automating infrastructure provisioning and incident remediation, developing reusable components and frameworks using Infrastructure as Code (IaC) tools, and collaborating with AI/ML engineers and SREs to create intelligent infrastructure decision-making logic. You will also be expected to implement secure and scalable infrastructure on cloud platforms such as AWS, Azure, and GCP, continuously improve agent performance through feedback loops, telemetry, and model fine-tuning, drive DevSecOps best practices, compliance, and observability, as well as mentor DevOps engineers and work closely with cross-functional teams. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 8 years of experience in DevOps, SRE, or Infrastructure Engineering. You must have proven experience in leading infrastructure automation projects, expertise with cloud platforms like AWS, Azure, and GCP, and deep knowledge of tools such as Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps. Hands-on experience with LLM/GenAI APIs, familiarity with automation frameworks, and proficiency in programming/scripting languages like Python, Go, or Bash are also required. Preferred qualifications for this role include experience in building or fine-tuning LLM-based agents, contributions to open-source GenAI or DevOps projects, understanding of MLOps pipelines and AI infrastructure, and certifications in DevOps, cloud, or AI technologies.
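To make the self-healing Infra Agent idea concrete, here is a hedged sketch of an LLM choosing a remediation action through OpenAI tool/function calling. The tool schema, alert text, and the kubectl command it prints are hypothetical; a real agent would add approval gates and audit logging.

```python
# Sketch: an LLM-driven "infra agent" selecting a remediation action via tool calling.
import json
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY

TOOLS = [{
    "type": "function",
    "function": {
        "name": "restart_deployment",
        "description": "Restart a Kubernetes deployment that is failing health checks.",
        "parameters": {
            "type": "object",
            "properties": {
                "namespace": {"type": "string"},
                "deployment": {"type": "string"},
            },
            "required": ["namespace", "deployment"],
        },
    },
}]

alert = "Deployment 'payments-api' in namespace 'prod' is in CrashLoopBackOff."

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an SRE assistant. Use tools to remediate incidents."},
        {"role": "user", "content": alert},
    ],
    tools=TOOLS,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model may also answer in plain text instead of calling a tool
    args = json.loads(msg.tool_calls[0].function.arguments)
    # A real agent would invoke kubectl or an operator here, behind approval gates.
    print(f"Would run: kubectl rollout restart deploy/{args['deployment']} -n {args['namespace']}")
else:
    print(msg.content)
```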
Posted 1 week ago
4.0 - 9.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
4+ years of experience in Java development with Spring Boot. Experience integrating AI/ML models into backend systems. Must have experience with at least one GenAI tool or platform such as Bedrock, OpenAI, TensorFlow, AI21, Anthropic, Cohere, Stability AI, or scikit-learn. Strong understanding of RESTful API design and microservices. Familiarity with AI/ML tools and frameworks (e.g., Python, TensorFlow, scikit-learn). Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of containerization (Docker, Kubernetes) and event-driven architectures. Preferred Qualifications Experience with GenAI platforms (e.g., AWS Bedrock, OpenAI) Understanding of MLOps practices and model lifecycle management. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Karnataka
On-site
Are you a curious mind eager to build real-world AI solutions? At Cygnus, we are on the lookout for smart interns ready to innovate with GenAI. You will have the opportunity to work directly with our CTO and product teams to prototype, fine-tune models, and solve practical business challenges using the latest AI tools. This is a paid internship with a path to full-time employment. As a selected intern, your day-to-day responsibilities will include building and deploying AI/ML models for internal tools and client-facing solutions. You will also get to explore and experiment with LLMs such as OpenAI, Ollama, Cohere, etc. Additionally, you will be automating workflows using Python, LangChain, n8n, or similar platforms. Your role will involve researching new GenAI techniques and applying them to our product stack. Collaboration with Sales, Ops, and Supply teams to ship rapid MVPs will also be a key part of your responsibilities. If you are passionate about AI and excited to shape the future of travel, we would love to hear from you. Let's build something amazing together. Good luck! :) About Company: Cygnus is a Pune-based AI-driven Corporate Travel Management Company that is revolutionizing how businesses manage travel. We exclusively work with corporates, offering a comprehensive one-stop solution for all their travel needs - from flights and hotels to visa, MICE, and cab bookings. Our mission is to replace outdated travel processes with a streamlined, tech-savvy approach that solves real-world challenges faced by travel managers. Backed by strong industry connectivity, a skilled professional team, and a commitment to continuous improvement, Cygnus is redefining corporate travel with innovation and efficiency at its core.
Posted 1 week ago
4.0 - 9.0 years
11 - 16 Lacs
Kolkata, Bengaluru
Work from Office
Role & responsibilities Building a Prompt Engineers Team requires a thoughtful approach that balances technical expertise, domain knowledge, and creativity. Heres a step-by-step plan to help you build an effective team: Step-by-Step Guide to Building a Prompt Engineering Team 1. Define the Purpose and Scope Start with clarity on: • Why you need a prompt engineering team (e.g., to improve chatbot accuracy, optimize RAG pipelines, generate content, automate processes). • Where prompts will be applied (e.g., customer support, legal research, code generation, marketing content). • What models/tools youre using (e.g., OpenAI GPT-4, Claude, Mistral, LangChain, LlamaIndex, Pinecone, etc.). 2. Identify Key Roles to Hire You dont need only prompt writers — build a balanced team. Typical roles include: Core Prompt Engineers • Focused on crafting, testing, and refining prompts • Excellent with language, logic, and experimentation NLP/LLM Engineers • Technical experts who understand model internals, fine-tuning, embeddings, and vector search • Useful if you’re building complex RAG or multi-agent systems Creative Writers or UX Writers (Optional) • Can write natural, engaging, or persuasive prompts especially for customer-facing apps or content generation Evaluation & QA Specialists • Test prompt outputs for accuracy, bias, safety, and relevance • Help benchmark performance 3. Create a Hiring Strategy Look for Candidates Who: • Understand LLM behavior and limitations • Have experience with OpenAI, Claude, or open-source models • Are comfortable with both creative writing and logical reasoning • Can iterate based on output quality Where to Find Them: • AI/NLP communities on Twitter, Discord (e.g., EleutherAI, LangChain) • LinkedIn with filters like: “Prompt Engineer,” “LLM Developer,” “NLP Engineer” • Platforms like GitHub (search AI prompt repositories), Kaggle, Upwork, Toptal 4. Set Up Your Tech Stack Equip the team with: • LLM Access: OpenAI API, Anthropic, HuggingFace, Cohere • Prompt Testing Tools: PromptLayer, LangSmith, PromptPerfect, Flowise • RAG Stack (if needed): LangChain, LlamaIndex, Chroma, Weaviate, Pinecone • Version Control: GitHub + prompt libraries • Prompt Analytics/Logging: To track changes, feedback loops, hallucinations, and success rates 5. Define a Workflow • Prompt Design Test Evaluate Refine Deploy Monitor • Use prompt templates (Few-shot, Chain-of-Thought, ReAct, etc.) • Implement CI for prompt validation if prompts are part of codebase 6. Build a Knowledge Base Document: • Prompt templates and examples • What works well (prompt tuning guides) • Failure cases and anti-patterns • Team learnings, A/B test results, and prompt libraries 7. Encourage Experimentation LLMs are probabilistic — experimentation is critical. Create a culture where: • Engineers test various prompt formats and model behaviors • Insights are shared internally • Prompt quality is benchmarked over time Preferred candidate profile Bonus: Team Composition Example (Startup Scale) Role Count Notes Prompt Engineer 2 NLP-aware, skilled with prompt design & LLMs NLP/LLM Developer 1 Can build pipelines, evaluate embeddings, tune models QA / Evaluator 1 Reviews and scores AI output for relevance, tone, accuracy Creative Writer (Optional) 1 Especially useful for marketing, support, or storytelling prompts Final Tip: Your first hires should be T-shaped people – with deep LLM experience but broad enough to collaborate across engineering, product, and UX.
Posted 1 week ago
3.0 - 5.0 years
4 - 9 Lacs
Pune
Work from Office
Role & responsibilities Design, prototype, and deploy AI-driven applications leveraging LLMs (GPT-4, Perplexity, Claude, Gemini, etc.) and open-source transformer models. Lead or co-lead end-to-end AI/GenAI solutions : from data ingestion, entity extraction, and semantic search to user-facing interfaces. Implement RAG (Retrieval-Augmented Generation) architectures, knowledge grounding pipelines, and prompt orchestration logic. Fine-tune transformer models (BERT, RoBERTa, T5, LLaMA) on custom datasets for use cases like: Document understanding Conversational AI Question answering Summarization & Topic Modeling Integrate LLM workflows into scalable backend architectures with APIs and frontends. Work closely with business teams and pharma SMEs to translate requirements into GenAI solutions . Mentor junior engineers and contribute to AI capability development across the organization. Tech Stack & Skills Required Programming & Libraries : Python, FastAPI, LangChain, Pandas, PyTorch/TensorFlow, Transformers (HuggingFace), OpenAI SDK. Data Extraction & Processing : PDFMiner, PyMuPDF, Tabula, PyPDF2, Tesseract OCR, python-pptx. Gen AI / LLMs : OpenAI (GPT), Gemini, Perplexity, Cohere, DeepSeek, Mistral, LLaMA, BERT, RoBERTa, T5, Falcon. Use Cases : NER, Summarization, QA, Document Parsing, Clustering, Topic Modeling, QA over docs. Embedding & Vector Databases : Pinecone, FAISS, ChromaDB. RAG & Retrieval Pipelines : LangChain, Haystack, custom retrievers. Frontend/Backend Integration : React (preferred), FastAPI/Flask, REST/GraphQL APIs. Versioning & Deployment : Git, Docker, CI/CD (basic), cloud knowledge is a plus (AWS/GCP/Azure). Preferred candidate profile Degree in Computer Science, Engineering, Data Science, or related field (BE/BTech/MTech/MCA). 35 years of hands-on experience in AI/ML/NLP/LLM solution development. Strong understanding of GenAI, Prompt Engineering, LLM internals , and multi-layered data architectures. Exposure to pharma/healthcare domain is a significant plus. Excellent problem-solving skills, self-learner, and ability to work in cross-functonal teams. Experience developing new applications within an agile environment preferred. Ability to work independently and as part of a team. Exposure to MLOPs will be an added advantage.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
The job is a full-time position located in Coimbatore within a division of Integra Corp, USA and Integra Ltd, UK. As a candidate, you are expected to have 3+ years of professional experience in software development, particularly with advanced proficiency in PHP. Your responsibilities will include developing and maintaining PHP-based web applications and integrating GenAI APIs for tasks such as content generation, summarization, sentiment analysis, chatbots, and automation. It is preferred that you have familiarity with prompt engineering basics and possess good working knowledge of web technologies like HTML5, CSS, JavaScript, AJAX, MySQL, jQuery, Bootstrap, etc. Additionally, knowledge of relational databases, version control tools, and web services development is required. Experience with the latest frontend technologies like ReactJS and NodeJS will be an added advantage. You should be able to build efficient, testable, and reusable PHP modules and demonstrate strong optimization and functional programming skills. Strong communication skills are essential for this role. Your responsibilities will involve integrating user-facing elements developed by front-end developers, building efficient PHP modules, solving complex performance problems, and handling architectural challenges. You will also be responsible for integrating data storage solutions, which may include databases, key-value stores, blob stores, etc. To impress the company, showcase your previous experience, sound programming principles, good written English skills, enthusiasm, positive attitude, and willingness to learn and adapt. At Integra, you can expect to work with international clients, receive world-class training on multiple skills, and have planned career growth opportunities. The company offers an excellent working atmosphere, ensures salary and bonus are always paid on time, and provides the opportunity to work for a US corporation and a UK company. Integra has continuously grown over the past 21 years and has very supportive senior management. If you are interested in this position, a walk-in interview is offered at the following address: Integra Global Solutions Corp, No.1, Palsun Towers, 1st Street, Behind KVB Bank, Tatabad, Sivananda Colony, Coimbatore-641012.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows. Your key responsibilities will include building and implementing a highly scalable big data platform to process terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining Big Data pipelines using technologies like Spark, Hadoop, Kafka, and Snowflake. Additionally, you will design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support. You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality. Participating in Agile/Scrum processes, discussing software features with product managers, and providing customer support over email or JIRA will also be part of your role. We are looking for candidates with 3+ years of coding experience in Java and backend development, solid computer science fundamentals, expertise in developing software engineering best practices, hands-on experience with Big Data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly is essential. Strong interpersonal and communication skills, including technical communications, are highly valued. To qualify for this role, you should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from well-known institutes/universities. PubMatic employees globally have returned to our offices via a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks like healthy snacks, drinks, and catered lunches. About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.
Posted 2 weeks ago
4.0 - 5.0 years
12 - 14 Lacs
Hyderabad, Pune
Work from Office
We're Hiring | Python Developer – Generative AI Location: Pune/Hyderabad Experience: 4–5 Years Join Us at the Cutting Edge of AI Innovation! Are you passionate about building products powered by Generative AI and LLMs? Do you enjoy working with cutting-edge tools like OpenAI, LangChain, and vector databases to bring AI ideas to life? We're looking for a Python Developer (GenAI) who is ready to innovate, collaborate, and help us scale real-world AI solutions from prototype to production. Key Responsibilities: • Build scalable Python applications leveraging LLMs, prompt engineering, and RAG pipelines • Develop REST APIs using FastAPI/Flask for GenAI applications • Collaborate across teams to deploy LLM-powered tools (chatbots, assistants, summarizers) • Work with frameworks like LangChain, Hugging Face, and vector DBs like Pinecone, FAISS • Write clean, modular code with performance optimization in mind Skills We're Looking For: • 4–5 years in Python development • Proven experience building LLM-based applications • Familiarity with OpenAI, Anthropic, Hugging Face, LangChain • Strong knowledge of REST APIs, microservices, and prompt engineering • Exposure to Git, CI/CD workflows, and agile environments Nice-to-Have: • Cloud exposure (AWS, Azure, GCP) • Experience with Docker/Kubernetes • Understanding of RAG pipelines, semantic search, or fine-tuning models Interested or know someone who fits? Drop your resume or referrals in the comments or send us a message! Thanks Dan - Dan@therxcloud.com
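For context on the "REST APIs using FastAPI for GenAI applications" bullet, here is a hedged sketch of a FastAPI endpoint wrapping an LLM call. The model name, prompt, and route are placeholders, not this team's actual service.

```python
# Sketch: a minimal FastAPI endpoint that proxies a question to an LLM.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # requires OPENAI_API_KEY

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
def ask(req: AskRequest) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a concise enterprise assistant."},
            {"role": "user", "content": req.question},
        ],
    )
    return {"answer": resp.choices[0].message.content}

# Run locally with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```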
Posted 3 weeks ago
2.0 - 4.0 years
2 - 7 Lacs
Kolkata, West Bengal, India
On-site
Key Responsibilities: Fine-tune and deploy LLMs (e.g., GPT, LLaMA, Mistral) using frameworks like Hugging Face Transformers and LangChain. Build and optimize RAG pipelines with vector databases (e.g., Pinecone, FAISS, Weaviate). Engineer prompts for structured and reliable outputs across diverse use cases such as chatbots, summarization tools, and coding assistants. Implement scalable inference pipelines; optimize for latency, throughput, and cost using quantization, distillation, and other model optimization techniques. Collaborate with product, design, and engineering teams to integrate generative AI capabilities into user-facing features. Monitor and improve model performance, accuracy, safety, and compliance in production. Ensure responsible AI practices through content filtering, output sanitization, and ethical deployment. Required Skills: Proficiency in Python and familiarity with modern machine learning tools and libraries. Hands-on experience with LLM development using Hugging Face Transformers, LangChain, or LlamaIndex. Experience building and deploying RAG pipelines, including managing embeddings and vector search. Strong understanding of transformer architectures, tokenization, and prompt engineering techniques. Comfortable working with LLM APIs (e.g., OpenAI, Anthropic, Cohere) and serving models with FastAPI, Flask, or similar frameworks. Familiarity with deploying ML systems using Docker, Kubernetes, and cloud services (AWS, GCP, Azure). Experience with model evaluation, logging, and inference pipeline troubleshooting. Nice to Have: Exposure to multimodal models (e.g., text-to-image, video generation, TTS). Experience with reinforcement learning from human feedback (RLHF) or alignment techniques. Familiarity with open-source LLMs (e.g., Mistral, Mixtral, LLaMA, Falcon) and optimization tools (LoRA, quantization, PEFT). Knowledge of LangChain agents, tool integration, and memory management. Contributions to open-source GenAI projects, public demos, or blogs in the generative AI space. Basic proficiency in frontend development (e.g., React, Next.js) for rapid prototyping.
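Since the posting mentions LoRA, PEFT, and fine-tuning open-source LLMs, here is an illustrative LoRA setup with the PEFT library. The base model, target modules, and hyperparameters are assumptions chosen for a small demonstration; a real fine-tune would also need a dataset, a training loop or Trainer, and GPU memory planning.

```python
# Sketch: attaching LoRA adapters to a small causal LM with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small model chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typical for LLaMA-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```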
Posted 1 month ago
7.0 - 12.0 years
25 - 30 Lacs
Bengaluru
Work from Office
What you will do Build robust backend services and APIs using Python (FastAPI, asyncio) for GenAI workflows and LLM-based systems. Develop and maintain GenAI applications using tools like LangChain, LangGraph, and Cohere, integrating them with custom APIs and data sources. Contribute to systems that enable intelligent routing of prompts, dynamic tool execution, and seamless model-data integration across multiple sources. Write clean, modular code for model integration, semantic search, and multi-step agent workflows. Package and deploy applications using Docker and Kubernetes, ensuring scalability and security. Collaborate with data engineers, AI scientists, and infra teams to ship end-to-end features quickly, without sacrificing quality. What you need to succeed Must Have Strong backend development skills in Python Solid understanding of machine learning and generative AI Degree in Computer Science or Engineering Experience deploying services with Docker and Kubernetes in a cloud environment. Familiarity with LLM APIs (OpenAI, Cohere) and how to build prompt-based applications around them. Comfortable writing and debugging PySpark jobs and working with Delta Lake in Databricks. Experience working with Git workflows, CI/CD, and container-based deployments. Nice to Have Experience with LangChain, LangGraph, or other LLM orchestration frameworks. Experience building and deploying MCP servers and AI agents. Experience with any vector database. Experience with MLflow.
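To illustrate the PySpark and Delta Lake work this role mentions, here is a minimal, hedged aggregation sketch. The table paths and column names are hypothetical; on Databricks the SparkSession and Delta support are preconfigured, while locally you would need the delta-spark package.

```python
# Sketch: read a Delta table, aggregate daily usage, write a gold-layer Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("genai-usage-prep").getOrCreate()

events = spark.read.format("delta").load("/mnt/lake/raw/chat_events")  # assumed path

daily_usage = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("day", "model_name")
    .agg(
        F.count("*").alias("requests"),
        F.avg("latency_ms").alias("avg_latency_ms"),
    )
)

daily_usage.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_usage")
```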
Posted 1 month ago
2.0 - 4.0 years
2 - 7 Lacs
Delhi, India
On-site
Key Responsibilities: Fine-tune and deploy LLMs (e.g., GPT, LLaMA, Mistral) using frameworks like Hugging Face Transformers and LangChain. Build and optimize RAG pipelines with vector databases (e.g., Pinecone, FAISS, Weaviate). Engineer prompts for structured and reliable outputs across diverse use cases such as chatbots, summarization tools, and coding assistants. Implement scalable inference pipelines; optimize for latency, throughput, and cost using quantization, distillation, and other model optimization techniques. Collaborate with product, design, and engineering teams to integrate generative AI capabilities into user-facing features. Monitor and improve model performance, accuracy, safety, and compliance in production. Ensure responsible AI practices through content filtering, output sanitization, and ethical deployment. Required Skills: Proficiency in Python and familiarity with modern machine learning tools and libraries. Hands-on experience with LLM development using Hugging Face Transformers, LangChain, or LlamaIndex. Experience building and deploying RAG pipelines, including managing embeddings and vector search. Strong understanding of transformer architectures, tokenization, and prompt engineering techniques. Comfortable working with LLM APIs (e.g., OpenAI, Anthropic, Cohere) and serving models with FastAPI, Flask, or similar frameworks. Familiarity with deploying ML systems using Docker, Kubernetes, and cloud services (AWS, GCP, Azure). Experience with model evaluation, logging, and inference pipeline troubleshooting. Nice to Have: Exposure to multimodal models (e.g., text-to-image, video generation, TTS). Experience with reinforcement learning from human feedback (RLHF) or alignment techniques. Familiarity with open-source LLMs (e.g., Mistral, Mixtral, LLaMA, Falcon) and optimization tools (LoRA, quantization, PEFT). Knowledge of LangChain agents, tool integration, and memory management. Contributions to open-source GenAI projects, public demos, or blogs in the generative AI space. Basic proficiency in frontend development (e.g., React, Next.js) for rapid prototyping.
Posted 1 month ago
12.0 - 22.0 years
40 - 60 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
Job Overview: We are looking for a GenAI Leader with 12+ years of experience and a strong background in pharma/life sciences to join our team. As a key leader, you will drive innovation in the pharma commercial and clinical space by designing cutting-edge AI solutions, leading high-performing teams, and delivering impactful results. If you have a proven track record in Generative AI, hands-on experience with models like GPT, DALL-E, and Stable Diffusion, and a passion for mentoring and thought leadership, we want to hear from you! Must have skills & competencies Enhance the go-to-market strategy by designing new and relevant solution frameworks to accelerate our clients' journeys for impacting patient outcomes. Pitch for these opportunities and craft winning proposals to grow the Data Science Practice. Build and lead a team of data scientists and analysts, fostering a collaborative and innovative environment. Oversee the design and delivery of the models, ensuring projects are completed on time and meet business objectives. Engaging in consultative selling with clients to grow/deliver business. Develop and operationalize scalable processes to deliver on large & complex client engagements. Proven experience in building and productizing GenAI apps at an enterprise level Ensure profitable delivery and great customer experience – design the end-to-end solution, put together the right team and help them deliver as per established processes. Build an A team – hire the required skill sets and nurture them in a supporting environment to develop strong delivery leaders for the Data Science Practice. Train and mentor staff and establish best practices and ways of working to enhance data science capabilities at Axtria. Operationalize an eco-system for continuous learning & development. Write white papers, collaborate with academia and participate in relevant speaker opportunities to continuously upgrade learning & establish Axtria's thought leadership in this space. Research, develop, evaluate and optimize newly emerging algorithms and technologies for relevant use cases in the pharma commercial & clinical space. Extensive hands-on experience with Python, R, or Julia, focusing on data science and generative AI frameworks. Expertise in working with generative models such as GPT, DALL-E, Stable Diffusion, Codex, and MidJourney for various applications. Proficiency in fine-tuning and deploying generative models using libraries like Hugging Face Transformers, Diffusers, or PyTorch Lightning. Strong understanding of generative techniques, including GANs, VAEs, diffusion models, and autoregressive models. Experience in prompt engineering, zero-shot, and few-shot learning for optimizing generative AI outputs across different use cases. Expertise in managing generative AI data pipelines, including preprocessing large-scale multimodal datasets for text, image, or code generation. Experience in leveraging APIs and SDKs for generative AI services from OpenAI, Anthropic, Cohere, and Google AI. Expertise in defining generative AI strategies, aligning them with business objectives, and implementing solutions for scalable deployment. Knowledge of real-world applications of generative AI, such as text generation, code generation, image generation, chatbots, and content creation. Awareness of ethical considerations in generative AI, including bias mitigation, data privacy, and safe deployment practices.
Good to have skills & competencies Proven ability to collaborate with cross-functional teams, including product managers, data scientists, and DevOps engineers, to deliver end-to-end generative AI solutions. Proven experience in leading teams focused on generative AI projects, mentoring ML engineers, and driving innovation in AI applications. Possessing robust analytical skills to address and model intricate business needs is highly advantageous, especially for those with a background in life sciences or pharmaceuticals. Eligibility Criteria Masters/PhD in CSE/IT from Tier 1 institute Minimum 12+ years of relevant experience in building software applications in data and analytics field. We will provide– (Employee Value Proposition) Offer an inclusive environment that encourages diverse perspectives and ideas Delivering challenging and unique opportunities to contribute to the success of a transforming organization Opportunity to work on technical challenges that may impact on geographies Vast opportunities for self-development: online Axtria Institute, knowledge sharing opportunities globally, learning opportunities through external certifications Sponsored Tech Talks & Hackathons Possibility of relocating to any Axtria office for short and long-term projects Benefit package: -Health benefits -Retirement benefits -Paid time off -Flexible Benefits -Hybrid /FT Office/Remote Axtria is an equal-opportunity employer that values diversity and inclusiveness in the workplace. Who we are Axtria 14 years journey Axtria, Great Place to Work Life at Axtria Axtria Diversity
Posted 1 month ago
8.0 - 13.0 years
14 - 24 Lacs
Pune, Ahmedabad
Hybrid
Senior Technical Architect – Machine Learning Solutions We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI), and scalable ML system design. This role will focus on leading the end-to-end architecture of advanced ML-driven platforms, delivering impactful, production-grade AI solutions across the enterprise. Key Responsibilities Lead the architecture and design of enterprise-grade ML platforms, including data pipelines, model training pipelines, model inference services, and monitoring frameworks. Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production. Design and implement retrieval-augmented generation (RAG) systems, vector databases, semantic search, and LLM orchestration frameworks (e.g., LangChain, Autogen). Define and enforce best practices in model development, versioning, CI/CD pipelines, model drift detection, retraining, and rollback mechanisms. Build robust pipelines for data ingestion, preprocessing, feature engineering, and model training at scale, using batch and real-time streaming architectures. Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured data use cases. Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services. Implement observability for ML models, including custom metrics, performance monitoring, and explainability (XAI) tooling. Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design. Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines. Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML), and scalable inference strategies (batch, real-time, streaming). Drive AI governance and responsible AI practices, including auditability, compliance, bias mitigation, and data protection. Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference. Ability to coach and lead technical teams, fostering growth, knowledge sharing, and technical excellence in AI/ML domains. Experience managing the technical roadmap for AI-powered products and their documentation, ensuring timely delivery, performance optimization, and stakeholder alignment. Required Qualifications Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 8+ years of experience in software architecture, with 5+ years focused specifically on machine learning systems and 2+ years leading teams. Proven expertise in designing and deploying ML systems at scale, across cloud and hybrid environments. Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, scikit-learn). Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere). Demonstrated proficiency in MLOps tools and platforms: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Airflow, etc. In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP, including certification(s) in one or more platforms. Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment.
Ability to design LLM-based systems, including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering. Solid understanding of security, compliance, and AI risk management in ML deployments. Preferred Skills Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking. Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF. Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption. Familiarity with zero-shot, few-shot learning, and retrieval-enhanced inference pipelines. Contributions to open-source ML tools or libraries. Experience deploying AI copilots, agents, or assistants using orchestration frameworks.
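As a hedged illustration of the MLOps tooling named in this posting (MLflow, experiment tracking, model versioning), here is a minimal tracking sketch. The experiment name, parameters, and toy model are placeholder choices, not a prescribed setup.

```python
# Sketch: logging params, metrics, and a versioned model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model-poc")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later serving
```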
Posted 2 months ago
12.0 - 22.0 years
12 - 22 Lacs
Delhi NCR, India
On-site
We are looking for a GenAI Leader with 12+ years of experience and a strong background in pharma/life sciences to join our team. As a key leader, you will drive innovation in the pharma commercial and clinical space by designing cutting-edge AI solutions, leading high-performing teams, and delivering impactful results. If you have a proven track record in Generative AI, hands-on experience with models like GPT, DALL-E, and Stable Diffusion, and a passion for mentoring and thought leadership, we want to hear from you! Must have skills & competencies Enhance the go-to-market strategy by designing new and relevant solution frameworks to accelerate our clients' journeys for impacting patient outcomes. Pitch for these opportunities and craft winning proposals to grow the Data Science Practice. Build and lead a team of data scientists and analysts, fostering a collaborative and innovative environment. Oversee the design and delivery of the models, ensuring projects are completed on time and meet business objectives. Engaging in consultative selling with clients to grow/deliver business. Develop and operationalize scalable processes to deliver on large & complex client engagements. Proven experience in building and productizing GenAI apps at an enterprise level Ensure profitable delivery and great customer experience – design the end-to-end solution, put together the right team and help them deliver as per established processes. Build an A team – hire the required skill sets and nurture them in a supporting environment to develop strong delivery leaders for the Data Science Practice. Train and mentor staff and establish best practices and ways of working to enhance data science capabilities at Axtria. Operationalize an eco-system for continuous learning & development. Write white papers, collaborate with academia and participate in relevant speaker opportunities to continuously upgrade learning & establish Axtria's thought leadership in this space. Research, develop, evaluate and optimize newly emerging algorithms and technologies for relevant use cases in the pharma commercial & clinical space. Extensive hands-on experience with Python, R, or Julia, focusing on data science and generative AI frameworks. Expertise in working with generative models such as GPT, DALL-E, Stable Diffusion, Codex, and MidJourney for various applications. Proficiency in fine-tuning and deploying generative models using libraries like Hugging Face Transformers, Diffusers, or PyTorch Lightning. Strong understanding of generative techniques, including GANs, VAEs, diffusion models, and autoregressive models. Experience in prompt engineering, zero-shot, and few-shot learning for optimizing generative AI outputs across different use cases. Expertise in managing generative AI data pipelines, including preprocessing large-scale multimodal datasets for text, image, or code generation. Experience in leveraging APIs and SDKs for generative AI services from OpenAI, Anthropic, Cohere, and Google AI. Expertise in defining generative AI strategies, aligning them with business objectives, and implementing solutions for scalable deployment. Knowledge of real-world applications of generative AI, such as text generation, code generation, image generation, chatbots, and content creation. Awareness of ethical considerations in generative AI, including bias mitigation, data privacy, and safe deployment practices.
Good to have skills & competencies Proven ability to collaborate with cross-functional teams, including product managers, data scientists, and DevOps engineers, to deliver end-to-end generative AI solutions. Proven experience in leading teams focused on generative AI projects, mentoring ML engineers, and driving innovation in AI applications. Possessing robust analytical skills to address and model intricate business needs is highly advantageous, especially for those with a background in life sciences or pharmaceuticals.
Posted 2 months ago