2.0 - 6.0 years
0 Lacs
Jaipur, Rajasthan
On-site
We are searching for a skilled and adaptable Data Engineer with proficiency in PySpark, Apache Spark, and Databricks, combined with knowledge of analytics, data modeling, and Generative AI/Agentic AI solutions. This position suits individuals who excel at the convergence of data engineering, AI systems, and business insights, contributing to impactful client programs.

Key Responsibilities:
- Design, build, and optimize distributed data pipelines using PySpark, Apache Spark, and Databricks to serve both analytics and AI workloads.
- Support RAG pipelines, embedding generation, and data pre-processing for LLM applications.
- Create and maintain interactive dashboards and BI reports using tools such as Power BI, Tableau, or Looker for business stakeholders and consultants.
- Conduct ad hoc data analysis to enable data-driven decision-making and rapid insight generation.
- Develop and maintain robust data warehouse schemas, star/snowflake models, and support data lake architecture.
- Integrate with and support LLM agent frameworks such as LangChain, LlamaIndex, Haystack, or CrewAI for intelligent workflow automation.
- Ensure data pipeline monitoring, cost optimization, and scalability in cloud environments (Azure/AWS/GCP).
- Collaborate with cross-functional teams, including AI scientists, analysts, and business teams, to drive use-case delivery.
- Maintain robust data governance, lineage, and metadata management practices using tools such as Azure Purview or DataHub.
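To illustrate the kind of distributed pipeline work this role describes, here is a minimal PySpark sketch; the paths, column names, and aggregation logic are hypothetical placeholders, not part of the actual role.

```python
# Hypothetical column and path names; adjust to the actual Databricks workspace layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_aggregation").getOrCreate()

# Ingest raw events from a (hypothetical) landing zone.
raw = spark.read.parquet("/mnt/landing/orders/")

# Basic cleansing and typing before aggregation.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount").isNotNull())
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Daily revenue per region, written back for BI dashboards and downstream AI workloads.
daily = clean.groupBy("order_date", "region").agg(
    F.sum("order_amount").alias("revenue"),
    F.countDistinct("customer_id").alias("unique_customers"),
)

daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders_daily/")
```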
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a leading financial services and healthcare technology company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 27,000+ employees in 35 countries. Some 20,000 financial services and healthcare organizations, from the world's largest companies to small and mid-market firms, rely on SS&C for expertise, scale, and technology.

We are looking for an experienced AI Developer with proficiency in Python, Generative AI (GenAI) incorporating Retrieval-Augmented Generation (RAG), Robotic Process Automation (RPA), and advanced document intelligence utilizing OCR and LLMs. In this role, you will develop AI-driven solutions that extract valuable insights from complex documents, whether digital or scanned, using tools such as Tesseract, Hugging Face Transformers, or similar technologies. Your primary focus will be on creating end-to-end automation and AI integration for intelligent document processing, enhancing decision-making capabilities and optimizing workflow efficiency throughout the organization.

Key Responsibilities:
- Develop and implement GenAI solutions with RAG pipelines to facilitate intelligent querying and summarization of document repositories.
- Extract and organize data from complex documents (e.g., PDFs, images) using a combination of OCR engines (e.g., Tesseract) and AI-based document and vision-language models (DiNO, SmolVLM, etc.).
- Incorporate OCR+LLM pipelines into business applications for processing scanned forms, contracts, and other unstructured documents.
- Automate repetitive, document-centric tasks using RPA tools (e.g., Blue Prism).
- Design and manage Python-based workflows to coordinate document ingestion, extraction, and LLM-powered processing.
- Collaborate across teams including product, data, and operations to deliver scalable, AI-enhanced document automation solutions.
- Ensure model performance, compliance, and audit readiness for all document-handling workflows.

Required Qualifications:
- Minimum of 4 years of hands-on programming experience with Python.
- Demonstrated expertise in building RAG-based GenAI applications using tools like LangChain, LlamaIndex, or equivalent.
- Proficiency in OCR tools (e.g., Tesseract, PaddleOCR) and transformer-based document models.
- Experience working with LLMs for document understanding, summarization, and Q&A.
- Proficient in RPA development utilizing platforms like Blue Prism.
- Knowledge of vector databases (e.g., FAISS, Pinecone) and embeddings for semantic retrieval.
- Strong understanding of REST APIs, JSON, and data integration workflows.

Please note that unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
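As a rough illustration of the OCR-plus-LLM document pipelines mentioned above, the sketch below combines Tesseract with an LLM call to structure scanned text; the file name, prompt, and model id are assumptions for illustration only, and an OpenAI API key is assumed to be set in the environment.

```python
# Hypothetical file name and model id; the OpenAI client reads OPENAI_API_KEY from the environment.
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()

def extract_fields(image_path: str) -> str:
    # Step 1: OCR the scanned page with Tesseract.
    raw_text = pytesseract.image_to_string(Image.open(image_path))

    # Step 2: ask an LLM to structure the OCR output into JSON.
    prompt = (
        "Extract the contract parties, effective date, and total value "
        "from the following OCR text. Reply as JSON.\n\n" + raw_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(extract_fields("scanned_contract.png"))  # placeholder file
```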
Posted 1 week ago
3.0 - 6.0 years
5 - 9 Lacs
Ahmedabad, Vadodara
Work from Office
We are hiring an experienced AI Engineer / ML Specialist with deep expertise in Large Language Models (LLMs), who can fine-tune, customize, and integrate state-of-the-art models like OpenAI GPT, Claude, LLaMA, Mistral, and Gemini into real-world business applications. The ideal candidate should have hands-on experience with foundation model customization, prompt engineering, retrieval-augmented generation (RAG), and deployment of AI assistants using public cloud AI platforms such as Azure OpenAI, Amazon Bedrock, Google Vertex AI, or Anthropic's Claude.

Key Responsibilities:
LLM Customization & Fine-Tuning
- Fine-tune popular open-source LLMs (e.g., LLaMA, Mistral, Falcon, Mixtral) using business/domain-specific data.
- Customize foundation models via instruction tuning, parameter-efficient fine-tuning (LoRA, QLoRA, PEFT), or prompt tuning.
- Evaluate and optimize the performance, factual accuracy, and tone of LLM responses.
AI Assistant Development
- Build and integrate AI assistants/chatbots for internal tools or customer-facing applications.
- Design and implement Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Haystack, or the OpenAI Assistants API.
- Use embedding models, vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB), and cloud AI services.
- Experience fine-tuning and maintaining microservices or LLM-driven databases is a must.
Cloud Integration
- Deploy and manage LLM-based solutions on AWS Bedrock, Azure OpenAI, Google Vertex AI, Anthropic Claude, or the OpenAI API.
- Optimize API usage, performance, latency, and cost.
- Secure integrations with identity/auth systems (OAuth2, API keys) and logging/monitoring.
Evaluation, Guardrails & Compliance
- Implement guardrails, content moderation, and RLHF techniques to ensure safe and useful outputs.
- Benchmark models using human evaluation and standard metrics (e.g., BLEU, ROUGE, perplexity).
- Ensure compliance with privacy, IP, and data governance requirements.
Collaboration & Documentation
- Work closely with product, engineering, and data teams to scope and build AI-based solutions.
- Document custom model behaviors, API usage patterns, prompts, and datasets.
- Stay up to date with the latest LLM research and tooling advancements.

Required Skills & Qualifications:
- Bachelor's or Master's in Computer Science, AI/ML, Data Science, or related fields.
- 3-6+ years of experience in AI/ML, with a focus on LLMs, NLP, and GenAI systems.
- Strong Python programming skills and experience with Hugging Face Transformers, LangChain, LlamaIndex.
- Hands-on with LLM APIs from OpenAI, Azure, AWS Bedrock, Google Vertex AI, Claude, Cohere, etc.
- Knowledge of PEFT techniques like LoRA, QLoRA, Prompt Tuning, Adapters.
- Familiarity with vector databases and document embedding pipelines.
- Experience deploying LLM-based apps using FastAPI, Flask, Docker, and cloud services.

Preferred Skills:
- Experience with open-source LLMs: Mistral, LLaMA, GPT-J, Falcon, Vicuna, etc.
- Knowledge of AutoGPT, CrewAI, agentic workflows, or multi-agent LLM orchestration.
- Experience with multi-turn conversation modeling and dialogue state tracking.
- Understanding of model quantization, distillation, or fine-tuning in low-resource environments.
- Familiarity with ethical AI practices, hallucination mitigation, and user alignment.
Tools & Technologies:
- LLM Frameworks: Hugging Face, Transformers, PEFT, LangChain, LlamaIndex, Haystack
- LLMs & APIs: OpenAI (GPT-4, GPT-3.5), Claude, Mistral, LLaMA, Cohere, Gemini, Azure OpenAI
- Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
- Serving & DevOps: Docker, FastAPI, Flask, GitHub Actions, Kubernetes
- Deployment Platforms: AWS Bedrock, Azure ML, GCP Vertex AI, Lambda, Streamlit
- Monitoring: Prometheus, MLflow, Langfuse, Weights & Biases
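For context on the parameter-efficient fine-tuning (LoRA/QLoRA/PEFT) work listed above, here is a minimal sketch using Hugging Face PEFT; the base model id and target modules are assumptions and vary by model.

```python
# Minimal parameter-efficient fine-tuning sketch; model id and target_modules are
# assumptions that depend on the chosen base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
# From here, training would proceed with the standard Hugging Face Trainer or TRL's
# SFTTrainer on domain-specific data.
```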
Posted 1 week ago
4.0 - 6.0 years
18 - 22 Lacs
Pune
Work from Office
We are looking for a GenAI/ML Engineer to design, develop, and deploy cutting-edge AI/ML models and Generative AI applications. This role involves working on large-scale enterprise use cases, implementing Large Language Models (LLMs), building Agentic AI systems, and developing data ingestion pipelines. The ideal candidate should have hands-on experience with AI/ML development and Generative AI applications, and a strong foundation in deep learning, NLP, and MLOps practices.

Key Responsibilities
- Design, develop, and deploy AI/ML models and Generative AI applications for various enterprise use cases.
- Implement and integrate Large Language Models (LLMs) using frameworks such as LangChain, LlamaIndex, and RAG pipelines.
- Develop Agentic AI systems capable of multi-step reasoning and autonomous decision-making.
- Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval techniques.
- Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to deploy AI solutions and enhance the AI stack.
- Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices.
- Leverage Azure and Databricks for training, serving, and monitoring AI models at scale.

Required Qualifications & Skills (Mandatory)
- 4+ years of hands-on experience in AI/ML development, including Generative AI applications.
- Expertise in RAG, LLMs, and Agentic AI implementations.
- Strong experience with LangChain, LlamaIndex, or similar LLM orchestration frameworks.
- Proficiency in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn.
- Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures.
- Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise use cases.
- Hands-on experience with Azure cloud services and Databricks.
- Proven track record in designing CI/CD pipelines and using MLOps tools like MLflow, DVC, or Kubeflow.

Soft Skills
- Strong problem-solving and critical thinking ability.
- Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Ability to collaborate effectively in agile, cross-functional teams.
- A growth mindset, eager to explore and learn emerging technologies.

Preferred Qualifications
- Familiarity with vector databases such as FAISS, Pinecone, or Weaviate.
- Experience with AutoGPT, CrewAI, or similar agent frameworks.
- Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools.
- Understanding of AI security, responsible AI, and model governance.

Role Dimensions
- Design and implement innovative GenAI applications to address complex business problems.
- Work on large-scale, complex AI solutions in collaboration with cross-functional teams.
- Take ownership of the end-to-end AI pipeline, from model development to deployment and monitoring.

Success Measures (KPIs)
- Successful deployment of AI and Generative AI applications.
- Optimization of data pipelines and model performance at scale.
- Contribution to the successful adoption of AI-driven solutions within enterprise use cases.
- Effective collaboration with cross-functional teams, ensuring smooth deployment of AI workflows.

Competency Alignment
- AI/ML Development: Expertise in building and deploying scalable and efficient AI models.
- Generative AI: Strong hands-on experience in Generative AI, LLMs, and RAG frameworks.
- MLOps: Proficiency in designing and maintaining CI/CD pipelines and implementing MLOps practices.
- Cloud Platforms: Experience with Azure and Databricks for AI model training and serving.
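As an illustrative sketch of the RAG ingestion-and-retrieval pipelines described in this listing, the snippet below chunks documents, indexes them in FAISS via LangChain, and retrieves context for an LLM prompt; import paths follow recent LangChain releases, and the corpus, query, and chunk sizes are placeholders.

```python
# Retrieval sketch with LangChain and FAISS; assumes OPENAI_API_KEY is set for the embeddings.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

docs = ["Enterprise policy text ...", "Product FAQ text ..."]  # stand-in corpus

# Chunk documents so retrieval returns focused passages rather than whole files.
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
chunks = splitter.create_documents(docs)

# Index chunks for vector search, then retrieve context to feed an LLM prompt.
store = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = store.as_retriever(search_kwargs={"k": 4})

context = retriever.invoke("What is the refund policy?")
print([c.page_content for c in context])
```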
Posted 1 week ago
6.0 - 9.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Note:
1. Candidates serving an immediate to 30-day notice period.
2. Candidates available for face-to-face and video interviews can apply.

This opening is for an LLM Engineer (weekend drive); the mandatory skills are:
- 5+ years of relevant experience in Python, AI, and machine learning
- 2+ years of relevant experience in GenAI/LLMs
- Hands-on experience with at least one end-to-end GenAI project
- Worked with LLMs such as GPT, Gemini, Claude, LLaMA, etc.
- LLM skills: RAG, LangChain, Transformers, TensorFlow, PyTorch, spaCy
- Experience with REST API integration (e.g., FastAPI, Flask)
- Proficient in prompt types: zero-shot, few-shot, chain-of-thought
- Knowledge of model training, fine-tuning, and deployment workflows
- LLMOps: at least one cloud (Azure/AWS), GitHub, Docker/Kubernetes, CI/CD pipelines
- Comfortable with embedding models and vector databases (e.g., FAISS, Pinecone)
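The prompt types called out above (zero-shot, few-shot, chain-of-thought) differ only in how much guidance the prompt carries; a small illustrative sketch with a made-up task:

```python
# Illustrative prompt patterns only; the question and examples are invented.
question = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"

# Zero-shot: the model gets only the task.
zero_shot = f"Answer the question: {question}"

# Few-shot: a handful of worked examples precede the real question.
few_shot = (
    "Q: 2 + 2 = ?\nA: 4\n"
    "Q: 10 - 3 = ?\nA: 7\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought: the prompt asks for intermediate reasoning before the answer.
chain_of_thought = (
    f"{question}\n"
    "Think step by step: compute the minutes from 9:40 to 11:05, then state the duration."
)
```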
Posted 1 week ago
5.0 - 10.0 years
50 - 60 Lacs
Bengaluru
Work from Office
Job Title: AI/ML Architect - GenAI, LLMs & Enterprise Automation
Location: Bangalore
Experience: 8+ years (including 4+ years in AI/ML architecture on cloud platforms)

Role Summary
We are seeking an experienced AI/ML Architect to define and lead the design, development, and scaling of GenAI-driven solutions across our learning and enterprise platforms. This is a senior technical leadership role where you will work closely with the CTO and product leadership to architect intelligent systems powered by LLMs, RAG pipelines, and multi-agent orchestration. You will own the AI solution architecture end to end, from model selection and training frameworks to infrastructure, automation, and observability. The ideal candidate will have deep expertise in GenAI systems and a strong grasp of production-grade deployment practices across the stack.

Must-Have Skills
- AI/ML solution architecture experience with production-grade systems
- Strong background in LLM fine-tuning (SFT, LoRA, PEFT) and RAG frameworks
- Experience with vector databases (FAISS, Pinecone) and embedding generation
- Proficiency in LangChain, LangGraph, LangFlow, and prompt engineering
- Deep cloud experience (AWS: Bedrock, ECS, Lambda, S3, IAM)
- Infra automation using Terraform, CI/CD via GitHub Actions or CodePipeline
- Backend API architecture using FastAPI or Node.js
- Monitoring & observability using Langfuse, LangWatch, OpenTelemetry
- Python, Bash scripting, and low-code/no-code tools (e.g., n8n)

Bonus Skills
- Hands-on with multi-agent orchestration frameworks (CrewAI, AutoGen)
- Experience integrating AI/chatbots into web, mobile, or LMS platforms
- Familiarity with enterprise security, data governance, and compliance frameworks
- Exposure to real-time analytics and event-driven architecture

You'll Be Responsible For
- Defining the AI/ML architecture strategy and roadmap
- Leading design and development of GenAI-powered products and services
- Architecting scalable, modular, and automated AI systems
- Driving experimentation with new models, APIs, and frameworks
- Ensuring robust integration between model, infra, and app layers
- Providing technical guidance and mentorship to engineering teams
- Enabling production-grade performance, monitoring, and governance
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Remote
Crew AI Engineer | Remote | Contractual - 6 months

Job Description:
We are looking for candidates with strong Python skills and knowledge of multi-agent frameworks such as CrewAI; knowledge of RAG concepts is mandatory. Good conceptual knowledge of LLM concepts, LangGraph, and LangChain is expected.

Required Skills:
- 5-8 years of experience in AI/ML or automation engineering.
- Strong hands-on experience with CrewAI or other LLM orchestration frameworks like LangChain, AutoGen, or Semantic Kernel.
- Proficiency in Python, including experience with async programming and API integration.
- Deep understanding of LLMs (OpenAI, Anthropic, Mistral, etc.) and prompt engineering.
- Familiarity with vector databases (e.g., Pinecone, FAISS, Chroma) and embeddings.
- Experience building and deploying production-ready agent-based solutions.
- Strong problem-solving skills and ability to translate business requirements into technical implementations.
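For a sense of the multi-agent work described above, here is a minimal CrewAI sketch with two cooperating agents; the roles and tasks are illustrative assumptions, the constructor arguments follow recent CrewAI versions, and an LLM API key is assumed to be configured in the environment.

```python
# Two-agent sketch with CrewAI; roles, goals, and task text are made up for illustration.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Collect relevant facts about a customer support ticket",
    backstory="Specialises in summarising internal knowledge-base articles.",
)
writer = Agent(
    role="Response Writer",
    goal="Draft a clear, policy-compliant reply to the customer",
    backstory="Writes concise support responses based on the analyst's notes.",
)

research = Task(
    description="Summarise the top three knowledge-base articles about refund delays.",
    expected_output="Bullet-point summary of relevant policies.",
    agent=researcher,
)
draft = Task(
    description="Write a customer-facing reply using the research summary.",
    expected_output="A short email draft.",
    agent=writer,
)

# Tasks run in order, with the writer building on the researcher's output.
crew = Crew(agents=[researcher, writer], tasks=[research, draft])
print(crew.kickoff())
```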
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Senior Engineer (GenAI & Prompt Engineering)
Company: Xebia | Job Type: Full-time | Location: Hybrid - Chennai, Bangalore, Hyderabad, Pune, Jaipur, Bhopal, Gurugram (India)

Job Summary
Xebia is hiring an experienced GenAI Engineer to lead initiatives embedding LLMs into internal DevOps and Engineering workflows. This is a hands-on opportunity for AI engineers with strong Python and prompt engineering skills.

Key Responsibilities
- Design and optimize LLM prompts for contextual accuracy
- Build RAG architectures using FAISS, Pinecone, Weaviate, etc.
- Integrate LLMs into CI/CD pipelines, dashboards, and internal tools
- Use LangChain, LlamaIndex, HuggingFace, and Haystack frameworks
- Fine-tune LLMs (OpenAI, Claude, Cohere, Mistral) for enterprise use cases
- Ensure compliance, security, and performance of AI systems
- Collaborate across teams to identify automation opportunities

Required Skills & Experience
- 6-10 years total experience with 2+ years in GenAI/LLM/NLP systems
- Strong in Python, vector DBs, embedding techniques
- Hands-on with LLM APIs, GenAI libraries, and NLP concepts
- Prior experience building contextual AI agents or CI/CD automation using LLMs

Nice to Have
- Familiarity with multi-modal models, DevSecOps, Kubernetes
- Experience with developer-facing GenAI use cases (log triage, changelogs, copilots)

Other Details
Mode: Hybrid (3 days a week in-office)
Locations: Chennai / Bangalore / Hyderabad / Pune / Jaipur / Bhopal / Gurugram
Notice Period: Immediate to max 2 weeks
Contract Role

To Apply
Send the following details to vijay.s@xebia.com: Full Name, Total Experience, Current CTC, Expected CTC, Current Location, Preferred Xebia Location, Notice Period / Last Working Day, Primary Skills, LinkedIn Profile

Keywords: GenAI, Prompt Engineering, RAG, LangChain, Python, FAISS, LLM, Vector DB, HuggingFace, LlamaIndex, CI/CD, Automation, DevOps, AI, NLP
Posted 2 weeks ago
5.0 - 9.0 years
0 - 0 Lacs
Karnataka
On-site
You will be responsible for building and interpreting machine learning models on real business data from the SigView platform, such as Logistic Regression, Boosted Trees (Gradient Boosting), Random Forests, and Decision Trees. Your tasks will include:
- Identifying data sources, integrating multiple sources or types of data, and applying data analytics expertise within a data source to develop methods that compensate for limitations and extend the applicability of the data.
- Extracting data from relevant sources, including internal systems and third-party data sources, through manual and automated web scraping.
- Validating third-party metrics by cross-referencing various syndicated data sources, and determining which numerical variables to use as they are from the raw datasets, which to categorize into buckets, and which to use to create new calculated numerical variables.
- Performing exploratory data analysis using PySpark to finalize the list of variables necessary to solve the business problem, and transforming formulated problems into implementation plans for experiments by applying appropriate data science methods, algorithms, and tools.
- Working with offshore teams after data preparation to identify the best statistical model/analytical solution that can be applied to the available data to solve the business problem and derive actionable insights.
- Collating the results of the models and preparing detailed technical reports showing how the models can be used and modified for different scenarios in the future to develop predictive insights.
- Developing multiple reports to facilitate the generation of various business scenarios and providing features for users to generate scenarios.
- Interpreting the results of tests and analyses to develop insights into formulated problems within the business/customer context and providing guidance on risks and limitations.
- Acquiring and using broad knowledge of innovative data analytics methods, algorithms, and tools, including Spark, Elasticsearch, Python, Databricks, Azure, Power BI, Azure Cloud services, LLMs/GenAI, and the Microsoft Suite.

This position may involve telecommuting and requires 10% national travel to meet with clients. The minimum requirements for this role include a Bachelor's Degree in Electronics Engineering, Computer Engineering, Data Analytics, Computer Science, or a related field, plus five (5) years of progressive experience in the job offered or a related occupation. Special skill requirements include applying statistical methods to validate results and support strategic decisions, building and interpreting advanced machine learning models, using tools such as Python, Scikit-Learn, XGBoost, Databricks, Excel, and Azure Machine Learning for data preparation and model validation, integrating diverse data sources using data analytics techniques, and performing data analysis and predictive model development using AI/ML algorithms. Mathematical knowledge of Statistics, Probability, Differentiation and Integration, Linear Algebra, and Geometry will be beneficial. Familiarity with Data Science libraries such as NumPy, SciPy, and Pandas, Azure Data Factory for data pipeline design, NLTK, spaCy, Hugging Face Transformers, Azure Text Analytics, OpenAI, Word2Vec, and BERT will also be advantageous.
The base salary for this position ranges from $171,000 to $190,000 per annum for 40 hours per week, Monday to Friday. If you have any applications, comments, or questions regarding the job opportunity described, please contact Piyush Khemka, VP, Business Operations, at 111 Town Square Pl., Suite 1203, Jersey City, NJ 07310.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a highly motivated Mid-Level AI Engineer to join our growing AI team. Your main responsibility will be to develop intelligent applications using Python, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) systems. Working closely with data scientists, backend engineers, and product teams, you will build and deploy AI-powered solutions that provide real-world value. Your key responsibilities will include designing, developing, and optimizing applications utilizing LLMs such as GPT, LLaMA, and Claude. You will also be tasked with implementing RAG pipelines to improve LLM performance using domain-specific knowledge bases and search tools. Developing and maintaining robust Python codebases for AI-driven solutions will be a crucial part of your role. Additionally, integrating vector databases like Pinecone, Weaviate, and FAISS, as well as embedding models for information retrieval, will be part of your daily tasks. You will work with APIs, frameworks like LangChain and Haystack, and various tools to create scalable AI workflows. Collaboration with product and design teams to define AI use cases and deliver impactful features will also be a significant aspect of your job. Conducting experiments to assess model performance, retrieval relevance, and system latency will be essential for continuous improvement. Staying up-to-date with the latest research and advancements in LLMs, RAG, and AI infrastructure is crucial for this role. To be successful in this position, you should have at least 3-5 years of experience in software engineering or AI/ML engineering, with a strong proficiency in Python. Experience working with LLMs such as OpenAI and Hugging Face Transformers is required, along with hands-on experience in RAG architecture and vector-based retrieval techniques. Familiarity with embedding models like SentenceTransformers and OpenAI embeddings is also necessary. Knowledge of API design, deployment, performance optimization, version control (e.g., Git), containerization (e.g., Docker), and cloud platforms (e.g., AWS, GCP, Azure) is expected. Preferred qualifications include experience with LangChain, Haystack, or similar LLM orchestration frameworks. Understanding NLP evaluation metrics, prompt engineering best practices, knowledge graphs, semantic search, and document parsing pipelines will be beneficial. Experience deploying models in production, monitoring system performance, and contributing to open-source AI/ML projects are considered advantageous for this role.,
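As a hedged illustration of the vector-database and embedding integration mentioned above, the sketch below builds a small FAISS index over sentence-transformer embeddings and runs a semantic search; the model name and documents are placeholders.

```python
# Minimal embedding-retrieval sketch; model name and documents are invented for illustration.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "Support is available 24x7 through the in-app chat.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs, normalize_embeddings=True)

# Inner product over normalised vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = encoder.encode(["How do I change my password?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
print([docs[i] for i in ids[0]])
```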
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are seeking a dedicated Data Scientist with a strong background in Natural Language Processing (NLP) and expertise in Large Language Models (LLMs). As a part of our team, you will play a crucial role in the development, optimization, and implementation of open-source and API-based LLMs to address real-world challenges. Your primary responsibilities will revolve around constructing resilient GenAI pipelines, innovative internal tools, and customer-centric applications. This position offers you a remarkable chance to be at the forefront of Artificial Intelligence advancements and make significant contributions to the evolution of intelligent systems through the utilization of Retrieval-Augmented Generation (RAG) frameworks, vector databases, and real-time inference APIs. Your responsibilities will include fine-tuning and enhancing open-source LLMs tailored to specific business sectors, building and managing RAG pipelines utilizing tools like LangChain, FAISS, and ChromaDB, creating LLM-powered APIs for applications like chatbots, Q&A systems, summarization, and classification, as well as designing effective prompt templates and implementing chaining strategies to augment LLM performance across diverse contexts. To excel in this role, you must possess a deep understanding of NLP principles and advanced deep learning techniques for text data, hands-on experience with LLM frameworks like Hugging Face Transformers or OpenAI APIs, familiarity with tools such as LangChain, FAISS, and ChromaDB, proficiency in developing REST APIs for machine learning models, proficiency in Python along with expertise in libraries such as PyTorch or TensorFlow, and a solid grasp of data structures, embedding techniques, and vector search systems. Desirable qualifications include prior experience in LLM fine-tuning and evaluation, exposure to cloud-based ML deployment on platforms like AWS, GCP, or Azure, and a background in information retrieval, question answering, or semantic search. If you are passionate about generative AI and eager to work with cutting-edge NLP and LLM technologies, we are excited to connect with you.,
Posted 2 weeks ago
2.0 - 4.0 years
12 - 15 Lacs
Gurugram
Work from Office
Responsibilities:
* Develop AI/ML models using Python, FAISS & FastAPI.
* Optimize model performance through LLM techniques.
* Collaborate with cross-functional teams on project delivery.

Benefits: Provident fund
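A minimal sketch of how Python, FAISS, and FastAPI might fit together for the responsibilities above; the route shape, model name, and corpus are illustrative assumptions, not a specification of the actual service.

```python
# Sketch of a FastAPI endpoint wrapping a small FAISS index; contents are placeholders.
import faiss
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
encoder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = ["Leave policy document ...", "Travel reimbursement rules ..."]
vectors = encoder.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

class Query(BaseModel):
    text: str
    top_k: int = 2

@app.post("/search")
def search(query: Query):
    # Embed the query, search the index, and return the matching passages.
    q = encoder.encode([query.text], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(q, dtype="float32"), query.top_k)
    return {"matches": [corpus[i] for i in ids[0] if i != -1]}
```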
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, building, and deploying scalable NLP/ML models for real-world applications. Your role will involve fine-tuning and optimizing Large Language Models (LLMs) using techniques like LoRA, PEFT, or QLoRA. You will work with transformer-based architectures such as BERT, GPT, LLaMA, and T5, and develop GenAI applications using frameworks like LangChain, Hugging Face, the OpenAI API, or RAG (Retrieval-Augmented Generation). Writing clean, efficient, and testable Python code will be a crucial part of your tasks. Collaboration with data scientists, software engineers, and stakeholders to define AI-driven solutions will also be essential, and you will evaluate model performance and iterate rapidly based on user feedback and metrics.

The ideal candidate should have a minimum of 3 years of experience in Python programming with a strong understanding of ML pipelines, plus a solid background in NLP, including text preprocessing, embeddings, NER, and sentiment analysis. Proficiency in ML libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and spaCy is essential. Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases like FAISS and ChromaDB, will be beneficial. Strong problem-solving and communication skills are highly valued, along with the ability to learn new tools and to work both independently and collaboratively in a fast-paced environment. Attention to detail and accuracy is crucial for this role.

Preferred skills include theoretical knowledge or experience in Data Engineering, Data Science, AI, ML, RPA, or related domains; certification in Business Analysis or Project Management from a recognized institution; and experience with agile methodologies such as Scrum or Kanban. Additional experience in deep learning and transformer architectures, prompt engineering, training LLMs, and GenAI pipeline preparation will be advantageous. Practical experience integrating LLMs such as ChatGPT, Gemini, or Claude with context-aware capabilities using RAG or fine-tuned models is a plus, as is knowledge of model evaluation and alignment and of metrics to measure model accuracy. Data curation from sources for RAG preprocessing and development of LLM pipelines is an added advantage. Proficiency in scalable deployment and logging tooling, including Flask, Django, FastAPI, APIs, Docker containerization, and Kubeflow, is preferred, along with familiarity with LangChain, LlamaIndex, vLLM, Hugging Face Transformers, LoRA, and a basic understanding of cost-to-performance tradeoffs.
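For the NLP fundamentals noted above (NER, sentiment analysis), a quick sketch with Hugging Face pipelines; the checkpoints are whatever defaults the library ships, and the example text is made up.

```python
# Quick sentiment and NER sketch with Hugging Face pipelines; default checkpoints are
# downloaded on first use, so no specific model is assumed here.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "Acme Corp's new support bot resolved my billing issue in minutes."
print(sentiment(text))   # e.g. a POSITIVE label with a confidence score
print(ner(text))         # e.g. "Acme Corp" tagged as an organisation entity
```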
Posted 2 weeks ago
2.0 - 4.0 years
12 - 15 Lacs
Pune
Work from Office
Lead and scale Django backend features, mentor 2 juniors, manage deployments, and ensure best practices. Expert in Django, PostgreSQL, Celery, Redis, Docker, CI/CD, and vector DBs. Own architecture, code quality, and production stability.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are looking for a talented and passionate RAG (Retrieval-Augmented Generation) Engineer with strong Python development skills to join our AI/ML team in Bengaluru, India. Your role involves working on cutting-edge NLP solutions that integrate information retrieval techniques with large language models (LLMs). The ideal candidate will have experience with vector databases, LLM frameworks, and Python-based backend development. In this position, your responsibilities will include designing and implementing RAG pipelines that combine retrieval mechanisms with language models, developing efficient and scalable Python code for LLM-based applications, and integrating with vector databases like Pinecone, FAISS, Weaviate, and more. You will fine-tune and evaluate the performance of LLMs using various prompt engineering and retrieval strategies, collaborating with ML engineers, data scientists, and product teams to deliver high-quality AI-powered features. Additionally, you will optimize system performance and ensure the reliability of RAG-based applications. To excel in this role, you must possess strong proficiency in Python and experience building backend services/APIs, along with a solid understanding of NLP concepts, information retrieval, and LLMs. Hands-on experience with at least one vector database, familiarity with Hugging Face Transformers, LangChain, and LLM APIs, and experience in prompt engineering, document chunking, and embedding techniques are essential. Good knowledge of working with REST APIs, JSON, and data pipelines is required. Preferred qualifications include a Bachelor's or Master's degree in Computer Science, Data Science, or a related field; experience with cloud platforms like AWS, GCP, or Azure; exposure to tools like Docker, FastAPI, or Flask; and an understanding of data security and privacy in AI applications.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
We are looking for a visionary Data Science Manager with expertise in Generative AI and Retrieval-Augmented Generation (RAG) to lead AI initiatives from both technical and business perspectives. In this role, you will lead a team of data scientists and ML engineers, design Generative AI models, develop statistical models, and integrate knowledge retrieval systems to enhance performance. Your responsibilities will include mentoring the team, designing scalable AI/ML solutions, implementing Generative AI models, and developing statistical models for forecasting and segmentation. You will also be responsible for integrating databases and retrieval systems, ensuring operational excellence in MLOps, and collaborating with various teams to identify high-impact use cases for GenAI. To qualify for this role, you should have a Master's in Computer Science or related fields, 10+ years of data science experience with 2+ years in GenAI initiatives, proficiency in Python and key libraries, and a strong foundation in statistical analysis and predictive modeling. Experience with cloud platforms, vector databases, and MLOps is essential, along with a background in sectors like legal tech, fintech, retail, or health tech. If you have a proven track record in building and deploying LLMs, RAG systems, and search solutions, along with a knack for influencing product roadmaps and executive strategy, this role is perfect for you. Your ability to translate complex AI concepts into actionable strategies and present findings to non-technical audiences will be crucial in driving AI/ML adoption and contributing to the company's innovation roadmap.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will own the full ML stack that transforms raw dielines, PDFs, and e-commerce images into a self-learning system that can read, reason about, and design packaging artwork. This includes building data-ingestion & annotation pipelines for SVG/PDF-to-JSON conversion, designing and modifying model heads using technologies such as LayoutLM-v3, CLIP, GNNs, and diffusion LoRAs, training & fine-tuning on GPUs, and shipping inference APIs and evaluation dashboards. Your daily tasks will involve close collaboration with packaging designers and a product manager, establishing you as the technical authority on all aspects of deep learning within this domain. Your key responsibilities will be divided into three main areas:

**Area Tasks:**
- Data & Pre-processing (40%): Writing robust Python scripts for parsing PDF, AI, and SVG files, extracting text, colour separations, images, and panel polygons. Implementing tools like Ghostscript, Tesseract, YOLO, and CLIP pipelines. Automating synthetic-copy generation for ECMA dielines and maintaining vocabulary YAMLs & JSON schemas.
- Model R&D (40%): Modifying LayoutLM-v3 heads, building panel-encoder pre-train models, adding Graph-Transformer & CLIP-retrieval heads, and running experiments, hyper-param sweeps, and ablations to track KPIs such as IoU, panel-F1, and colour recall.
- MLOps & Deployment (20%): Packaging training & inference into Docker/SageMaker or GCP Vertex jobs, maintaining CI/CD and experiment tracking, serving REST/GraphQL endpoints, and implementing an active-learning loop for designer corrections.

**Must-Have Qualifications:**
- 5+ years of Python experience and 3+ years of deep-learning experience with PyTorch, Hugging Face.
- Hands-on experience with Transformer-based vision-language models and object-detection pipelines.
- Proficiency in working with PDF/SVG tool-chains, designing custom heads/loss functions, and fine-tuning pre-trained models on limited data.
- Strong knowledge of Linux, GPUs, graph neural networks, and relational transformers.
- Proficient in Git, code-review discipline, and writing reproducible experiments.

**Nice-to-Have:**
- Knowledge of colour science, multimodal retrieval, diffusion fine-tuning, or packaging/CPG industry exposure.
- Experience with vector search tools, AWS/GCP ML tooling, and front-end technologies like TypeScript/React.

You will own a tool stack including DL frameworks (PyTorch, Hugging Face Transformers, torch-geometric), parsing/CV tools and OCR/detectors, retrieval tools like CLIP/ImageBind, and MLOps tools such as Docker, GitHub Actions, and W&B or MLflow. In the first 6 months, you are expected to deliver a data pipeline for converting ECMA dielines and PDFs, a panel-encoder checkpoint, an MVP copy-placement model, and a REST inference service with a designer preview UI. You will report to the Head of AI or CTO and collaborate with a front-end engineer, a product manager, and two packaging-design SMEs.
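As a rough sketch of the PDF-to-JSON pre-processing described in the Data & Pre-processing area, the snippet below extracts page text and word-level boxes with PyMuPDF; the file name and output schema are illustrative assumptions, not the project's actual dieline schema.

```python
# Pre-processing sketch with PyMuPDF (fitz); file name and JSON layout are placeholders.
import json
import fitz  # PyMuPDF

doc = fitz.open("sample_dieline.pdf")
pages = []
for page in doc:
    pages.append({
        "page": page.number,
        "text": page.get_text("text"),
        # Word-level boxes (x0, y0, x1, y1, word, ...) can feed layout models such as LayoutLM.
        "words": page.get_text("words"),
    })

with open("sample_dieline.json", "w") as f:
    json.dump(pages, f, indent=2)
```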
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate with hands-on experience in GPT-4, transformer models, and deep learning frameworks, along with a deep understanding of model fine-tuning, deployment, and inference. Your key responsibilities will include designing, developing, and maintaining Python applications tailored to AI/ML and generative AI, and building and refining transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation will be a crucial aspect of your role. Moreover, you will implement model inference pipelines and scalable APIs using FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential to creating end-to-end AI solutions, as will staying updated with the latest research and advancements in generative AI and ML. From a technical standpoint, you should demonstrate strong proficiency in Python and its relevant libraries, such as NumPy, Pandas, and Scikit-learn. At least 7 years of experience in AI/ML development is required, along with hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models. Experience with frameworks like Hugging Face Transformers, the OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable. Additionally, expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Strong problem-solving and algorithmic thinking are crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered added advantages. In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required; a Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Scientist specializing in Natural Language Processing (NLP) and Large Language Models (LLMs), you will play a crucial role in designing, fine-tuning, and deploying cutting-edge open-source and API-based LLMs to address real-world challenges. Your primary focus will be on creating robust GenAI pipelines, innovative internal tools, and engaging client-facing applications. You will have the exciting opportunity to work at the forefront of AI technology, contributing to the advancement of intelligent systems through the utilization of Retrieval-Augmented Generation (RAG) frameworks, vector databases, and real-time inference APIs. Your responsibilities will include fine-tuning and optimizing open-source LLMs for specific business domains, constructing and managing RAG pipelines using tools like LangChain, FAISS, and ChromaDB, as well as developing LLM-powered APIs for diverse applications such as chat, Q&A, summarization, and classification. Additionally, you will be tasked with designing effective prompt templates and implementing chaining strategies to enhance LLM performance across various contexts. To excel in this role, you should possess a strong foundation in NLP fundamentals and deep learning techniques for text data, hands-on experience with LLM frameworks like Hugging Face Transformers or OpenAI APIs, and familiarity with tools such as LangChain, FAISS, and ChromaDB. Proficiency in developing REST APIs to support ML models, expertise in Python programming with knowledge of libraries like PyTorch or TensorFlow, and a solid grasp of data structures, embedding techniques, and vector search systems are also essential. Preferred qualifications include prior experience in LLM fine-tuning and evaluation, exposure to cloud-based ML deployment (AWS, GCP, Azure), and a background in information retrieval, question answering, or semantic search. If you are passionate about generative AI and eager to contribute to the latest developments in NLP and LLMs, we are excited to connect with you.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Delhi
On-site
As a Lead Data Scientist at our company, you will play a crucial role in our AI/ML team by leveraging your deep expertise in Generative AI, large language models (LLMs), and end-to-end ML engineering. Your responsibilities will involve designing and developing intelligent systems using advanced NLP techniques and modern ML practices. You will be a key player in building and optimizing ML pipelines and AI systems across various domains, as well as designing and deploying RAG architectures and intelligent chatbots. Collaboration with cross-functional teams to integrate AI components into scalable applications will be essential, along with providing technical leadership, conducting code reviews, and mentoring junior team members. You will drive experimentation with prompt engineering, agentic workflows, and domain-driven designs, while ensuring best practices in testing, clean architecture, and model reproducibility. To excel in this role, you must possess expertise in AI/ML, including Machine Learning, NLP, Deep Learning, and Generative AI (GenAI). Proficiency in working with the LLM stack, such as GPT, Chatbots, Prompt Engineering, and RAG, is required. Strong programming skills in Python, familiarity with essential libraries like Pandas, NumPy, and Scikit-learn, and experience with architectures like Agentic AI, DDD, TDD, and Hexagonal Architecture are essential. You should be comfortable with tooling and deployment using Terraform, Docker, REST/gRPC APIs, and Git, and have experience working on cloud platforms like AWS, GCP, or Azure. Familiarity with AI coding tools like Copilot, Tabnine, and hands-on experience with distributed training in NVIDIA GPU-enabled environments are necessary. A proven track record of managing the full ML lifecycle from experimentation to deployment is crucial for success in this position. Additionally, experience with vector databases, knowledge of GenAI frameworks like LangChain and LlamaIndex, contributions to open-source GenAI/ML projects, and skills in performance tuning of LLMs and custom models are considered advantageous. If you are passionate about leveraging AI technologies to deliver real-world solutions, we are excited to discuss how you can contribute to our cutting-edge AI/ML team.,
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Senior Manager - Senior Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role
We are seeking a highly experienced senior data scientist with 8+ years of expertise in machine learning, focusing on NLP, Generative AI, and advanced LLM ecosystems. This role demands leadership in designing and deploying scalable AI systems leveraging the latest advancements such as Google ADK, Agent Engine, and Gemini LLM. You will spearhead building real-time inference pipelines and agentic AI solutions that power complex, multi-user applications with cutting-edge technology.

Key Responsibilities
- Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users.
- Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents.
- Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications.
- Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring.
- Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms.
- Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality.
- Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields.
- 7+ years in ML engineering, applied AI, or senior data scientist roles.
- Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, Hugging Face Transformers.
- Deep experience with NLP, Transformer models, and generative AI techniques.
- Practical knowledge of LLM inference scaling with tools like vLLM, Groq, Triton Inference Server, and Google ADK.
- Hands-on experience deploying AI models to concurrent users with high throughput and low latency.
- Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
- Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG).
- Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications
- Experience with Google Gemini and other advanced LLM innovations.
- Contributions to open-source AI/ML projects or participation in applied AI research.
- Knowledge of hardware acceleration and GPU/TPU-based inference optimization.
- Exposure to event-driven architectures or streaming pipelines (Kafka, Redis).
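To illustrate the real-time, multi-user LLM inference this role centres on, here is a minimal vLLM batch-generation sketch; the model checkpoint, prompts, and sampling settings are placeholders, and a GPU environment is assumed.

```python
# High-throughput batch inference sketch with vLLM; model id and prompts are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder checkpoint
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = [
    "Summarise the key risks in this contract: ...",
    "Draft a polite follow-up email about a delayed shipment.",
]

# vLLM batches and schedules these requests internally for GPU-efficient concurrent serving.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```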
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Lead Assistant Manager - Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role
We are looking for a motivated Data Scientist with 5+ years of experience in machine learning and data science, focusing on NLP and Generative AI. You will contribute to the design, development, and deployment of AI solutions centered on Large Language Models (LLMs) and agentic AI technologies, including Google ADK, Agent Engine, and Gemini. This role involves working closely with senior leadership to build scalable, real-time inference systems and intelligent applications.

Key Responsibilities
- Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users.
- Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents.
- Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications.
- Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring.
- Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms.
- Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality.
- Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields.
- 5+ years in ML engineering, applied AI, or data scientist roles.
- Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, Hugging Face Transformers.
- Deep experience with NLP, Transformer models, and generative AI techniques.
- Hands-on experience deploying AI models to concurrent users with high throughput and low latency.
- Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
- Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG).
- Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications
- Experience with Google Gemini and other advanced LLM innovations.
- Contributions to open-source AI/ML projects or participation in applied AI research.
- Knowledge of hardware acceleration and GPU/TPU-based inference optimization.
Posted 3 weeks ago
7.0 - 12.0 years
0 - 0 Lacs
Indore, Bengaluru
Work from Office
Required Skills & Experience:
- 4+ years of experience in penetration testing, red teaming, or offensive security.
- 1+ years working with AI/ML or LLM-based systems.
- Deep familiarity with LLM architectures (e.g., GPT, Claude, Mistral, LLaMA) and pipelines (e.g., LangChain, Haystack, RAG-as-a-Service).
- Strong understanding of embedding models, vector databases (Pinecone, Weaviate, FAISS), and API-based model deployments.
- Experience with adversarial ML, secure inference, and data integrity in training pipelines.
- Experience with red team infrastructure and tooling such as Cobalt Strike, Mythic, Sliver, Covenant, and custom payload development.
- Proficient in scripting languages such as Python, PowerShell, Bash, or Go.
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - AI/ML
We are seeking an experienced GenAI/ML Engineer to integrate LLM APIs, build AI-driven applications, optimize model performance, and deploy AI services at scale. The ideal candidate has expertise in Python-based AI development, LLM orchestration, cloud deployment, and enterprise AI integration. The major focus should be on Gemini, as CVS is a GCP shop.

Responsibilities
- AI Application Development: Build and maintain Python-based AI services using LangChain and CrewAI. Implement RAG-based retrieval and Agentic AI workflows.
- LLM Integration & Optimization: Integrate Gemini, OpenAI, and Azure OpenAI APIs. Optimize API calls using temperature settings and reduce hallucinations using embedding-based retrieval (FAISS, Pinecone).
- Model Evaluation & Performance Tuning: Assess AI models using model scoring, fine-tune embeddings, and enhance similarity search for retrieval-augmented applications.
- API & Microservices Development: Design scalable RESTful API services. Secure AI endpoints using OAuth2, JWT authentication, and API rate limiting.
- Cloud Deployment & Orchestration: Deploy AI-powered applications using AWS Lambda, Kubernetes, Docker, and CI/CD pipelines. Implement LangChain for AI workflow automation.
- Agile Development & Innovation: Work in Scrum teams, estimate tasks accurately, and contribute to incremental AI feature releases.

Qualifications we seek in you!
Minimum Qualifications
- BE/B.Tech/M.Tech/MCA
- Experience in AI/ML: PyTorch, TensorFlow, Hugging Face, Pinecone
- Experience in LLMs & APIs: OpenAI, LangChain, CrewAI
- Experience in Cloud & DevOps: AWS, Azure, Kubernetes, Docker, CI/CD
- Experience in Security & Compliance: OAuth2, JWT, HIPAA

Preferred Qualifications
- Experience in AI/ML, LLM integrations, and enterprise cloud solutions
- Proven expertise in GenAI API orchestration, prompt engineering, and embedding retrieval
- Strong knowledge of scalable AI architectures and security best practices

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
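Given the Gemini/GCP focus above, here is a minimal sketch of calling Gemini through the google-generativeai SDK; the model id, prompt, and environment variable are assumptions for illustration.

```python
# Gemini call sketch using the google-generativeai SDK; GOOGLE_API_KEY is assumed to be set.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model id

response = model.generate_content(
    "Classify this member message as claims, billing, or pharmacy: "
    "'I was charged twice for my last prescription refill.'"
)
print(response.text)
```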
Posted 1 month ago
7.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Hiring a Senior GenAI Engineer with 7-12 years of experience in Python, Machine Learning, and Large Language Models (LLMs) for a 6-month engagement based in Gurugram. This hands-on role involves building intelligent systems using LangChain and RAG, developing agent workflows, and defining technical roadmaps. The ideal candidate will be proficient in LLM architecture, prompt engineering, vector databases, and cloud platforms (AWS, Azure, GCP). The position demands strong collaboration skills, a system design mindset, and a focus on production-grade AI/ML solutions.
Posted 1 month ago