
8 RAG Systems Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Generative AI Azure OpenAI Prompt Engineer with 5 to 8 years of experience, you will develop LLM-powered applications on Azure for real-world use cases such as chatbots, document summarization, decision support systems, and enterprise search through Retrieval-Augmented Generation (RAG). This hybrid role spans ML engineering, software development, and cloud infrastructure, with a significant emphasis on Azure's AI ecosystem.

Key competencies include LLM application development, with the ability to build end-to-end applications using GPT-4 or similar models; experience deploying and managing models using Azure's tools and APIs; prompt engineering skills, including the ability to distinguish between poor, good, and optimized prompts; familiarity with combining LLMs with document retrieval systems such as vector databases or Azure AI Search; and knowledge of GenAI toolkits such as LangChain, Prompt Flow, or Semantic Kernel. Candidates should also demonstrate MLOps and integration skills, automating and monitoring AI applications with Azure Functions, Logic Apps, or Event Grid, and should write clean Python code to call APIs, handle JSON responses, and integrate with cloud services. Soft skills matter as well: cross-functional communication to translate business needs into technical LLM pipelines, a cost-aware mindset to optimize token usage, and awareness of security, governance, and responsible AI practices related to Azure identity and permissions.

If you are interested in this role or know someone who might be a good fit, please reach out to Maithili Nayak at maithili.nayak@accionlabs.com with your latest detailed CV for further discussion. Thank you.
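For illustration only, a minimal sketch of the RAG pattern this role describes, assuming the openai and azure-search-documents Python SDKs; the endpoints, keys, index name, document field, and deployment name are placeholders rather than details from the listing:

```python
# Minimal RAG sketch: retrieve context from Azure AI Search, then ask an
# Azure OpenAI chat deployment to answer grounded in that context.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<your-search>.search.windows.net",  # placeholder
    index_name="enterprise-docs",                          # placeholder
    credential=AzureKeyCredential("<search-key>"),
)
llm = AzureOpenAI(
    azure_endpoint="https://<your-openai>.openai.azure.com",  # placeholder
    api_key="<openai-key>",
    api_version="2024-02-01",
)

def answer(question: str) -> str:
    # Retrieve the top-k chunks most relevant to the question.
    hits = search.search(search_text=question, top=3)
    context = "\n\n".join(doc["content"] for doc in hits)  # assumes a 'content' field
    # Ground the model's answer in the retrieved context.
    response = llm.chat.completions.create(
        model="gpt-4",  # name of your Azure OpenAI deployment
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is our travel reimbursement policy?"))
```

A production pipeline would typically add vector or hybrid search, chunking, and source citation on top of this skeleton.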

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Software Engineer on the Developer Experience and Productivity Engineering team at Coupa, you will play a crucial role in designing, implementing, and enhancing our AI orchestration platform. Your primary responsibilities include architecting the AI and MCP tools architecture with a focus on scalability and maintainability, developing integration mechanisms for seamless connectivity between AI platforms and MCP systems, and building secure connectors to internal systems and data sources. You will collaborate with product managers to prioritize and implement features that deliver significant business value, mentor junior engineers, contribute to engineering best practices, and help build a scalable, domain-based hierarchical structure for our AI platforms. The role also involves creating specialized tools tailored to Coupa's operational practices, implementing secure knowledge integration with AWS RAG and Knowledge Bases, and designing systems that expand capabilities while remaining manageable.

In this role, you will work at the forefront of AI integration and orchestration, tackling complex technical challenges with direct business impact. You will collaborate with a talented team passionate about AI innovation, help transform how businesses leverage AI for operational efficiency, contribute to an architecture that scales intelligently as capabilities grow, and work with the latest LLM technologies to shape their application in enterprise environments.

To excel in this position, you should have at least 5 years of professional software engineering experience, proficiency in Python and RESTful API development, and experience building and deploying cloud-native applications, preferably on AWS. A solid understanding of AI/ML concepts, software design patterns, system architecture, and performance optimization is essential, along with experience integrating multiple complex systems and APIs, strong problem-solving skills, attention to detail, and the ability to explain complex technical concepts clearly.

Preferred qualifications include experience with AI orchestration platforms or building tools for LLMs; knowledge of vector databases, embeddings, and RAG systems; familiarity with monitoring tools like New Relic, observability patterns, and SRE practices; and experience with DevOps tools such as Jira, Confluence, or GitHub and their APIs. An understanding of security best practices for AI systems and data access, previous work with domain-driven design and microservices architecture, and contributions to open-source projects or developer tools are also advantageous.

Coupa is committed to providing equal employment opportunities to all qualified candidates and employees and to fostering a welcoming and inclusive work environment. Decisions related to hiring, compensation, training, and performance evaluation are made fairly and in compliance with relevant laws and regulations. Please note that inquiries or resumes from recruiters will not be accepted. By applying to this position, you acknowledge that Coupa collects your application, including personal data, for managing ongoing recruitment and placement activities, as well as for employment purposes if your application is successful. You can find more information about how your application is processed, the purposes of processing, and data retention in Coupa's Privacy Policy.
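As a rough, hypothetical illustration of the domain-based, hierarchical tool structure mentioned above, here is a plain-Python sketch with invented domain and tool names; it is not Coupa's actual MCP architecture:

```python
# Illustrative sketch of a hierarchical, domain-based tool registry for an
# AI orchestration layer. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]

@dataclass
class Domain:
    name: str
    tools: Dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

class ToolRegistry:
    """Groups tools by business domain so an orchestrator can route requests."""
    def __init__(self) -> None:
        self.domains: Dict[str, Domain] = {}

    def domain(self, name: str) -> Domain:
        return self.domains.setdefault(name, Domain(name))

    def invoke(self, domain: str, tool: str, **kwargs) -> str:
        return self.domains[domain].tools[tool].handler(**kwargs)

registry = ToolRegistry()
registry.domain("procurement").register(
    Tool("lookup_po", "Fetch a purchase order by id",
         handler=lambda po_id: f"PO {po_id}: status=approved")  # stub connector
)
print(registry.invoke("procurement", "lookup_po", po_id="12345"))
```

Grouping tools by domain keeps the surface area manageable as new connectors are added, which is the scalability concern the listing emphasizes.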

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an AI/ML Engineer - Generative AI Specialist with over 5 years of experience, you are expected to bring the following must-have skills: a strong background in AI/ML frameworks such as PyTorch and TensorFlow and proficiency in Python programming; experience with AWS services, specifically AWS Bedrock and Lambda functions; hands-on expertise with LangChain components (agents, chains, memories, parsers, document loaders) and with developing and optimizing GenAI and NLP models as well as CrewAI; the ability to build and deploy RAG (Retrieval-Augmented Generation) systems, chatbots, and AI applications using OpenAI APIs, Ollama, Llama, and LlamaParse; advanced prompt engineering techniques such as Chain-of-Thought (CoT) prompting; experience with Docker, CI/CD pipelines, YAML files, and GitHub repository management; and a strong understanding of system design principles, with a focus on security, scalability, and performance optimization.

Good-to-have skills include experience with responsible AI practices, knowledge of event-driven architecture, familiarity with cloud deployments and Kubernetes, and experience with large-scale AI model training and fine-tuning.

Your key responsibilities will involve designing and implementing AI/ML solutions using cutting-edge technologies to ensure scalability, efficiency, and alignment with business objectives. You will develop and optimize LangChain components, CrewAI, GenAI, and NLP models customized to specific applications, and build and deploy production-ready RAG systems, chatbots, and AI-driven applications using OpenAI APIs, Ollama, Llama, and LlamaParse. You will leverage AWS Bedrock and other AWS services to deliver high-performance AI solutions, apply prompt engineering techniques such as Chain-of-Thought prompting to enhance AI interactions, maintain CI/CD pipelines, manage YAML files, and use GitHub for version control. Implementing containerization with Docker and deploying AI solutions in cloud environments also fall under your responsibilities. You will ensure that solutions follow best practices in system design, with attention to security, performance, and efficiency, and incorporate responsible AI practices in compliance with ethical AI guidelines. Collaboration with cross-functional teams to translate AI insights into impactful solutions, and clear communication of complex technical concepts to both technical and non-technical stakeholders, are also expected.
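The sketch below illustrates Chain-of-Thought (CoT) prompting against AWS Bedrock using boto3; the model id and request body follow Anthropic's Bedrock messages format and are examples only, so both should be adjusted to whichever Bedrock model a given account actually uses:

```python
# Sketch of Chain-of-Thought (CoT) prompting via AWS Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

cot_prompt = (
    "A warehouse ships 120 orders per hour and each order averages 3 items.\n"
    "How many items are shipped in an 8-hour shift?\n"
    "Think step by step, then give the final answer on its own line."
)

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [{"role": "user", "content": cot_prompt}],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model id
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

The CoT instruction ("think step by step") nudges the model to expose intermediate reasoning before the final answer, which generally improves reliability on multi-step tasks.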

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

Tata Electronics Pvt. Ltd. is a key global player in the electronics manufacturing industry, specializing in Electronics Manufacturing Services, Semiconductor Assembly & Test, Semiconductor Foundry, and Design Services. Established in 2020 by the Tata Group, the company's primary objective is to provide integrated solutions to global customers across the electronics and semiconductor value chain.

We are looking for an AI Core Developer to join our R&D team in Bangalore. This role is centered on fundamental AI research, algorithm development, and model pre-training, focusing on innovation rather than application engineering. As an AI Core Developer, you will be involved in cutting-edge AI research, creating novel algorithms, and building foundation models from scratch. The position is ideal for individuals with a strong background in pre-training methodologies and algorithm development who want to contribute to core AI advancements. Your responsibilities will include developing and implementing innovative machine learning algorithms for various AI systems, designing pre-training pipelines for large models, prototyping new AI architectures, collaborating with research scientists and engineers, and contributing to technical publications.

The ideal candidate holds a Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field, with 2-4 years of hands-on experience in AI/ML development. Proficiency in Python and C/C++, knowledge of deep learning frameworks such as PyTorch and TensorFlow, and experience with model pre-training are essential. Strong mathematical skills, familiarity with transformer architectures and attention mechanisms, and an understanding of distributed computing are also key competencies. Preferred qualifications include advanced experience with multimodal AI systems, research contributions to top-tier AI conferences, and expertise in specific AI domains such as healthcare or finance. The position is based in Bangalore, India, with a hybrid work arrangement and occasional travel for conferences and collaborations.
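As a small, generic refresher on the transformer fundamentals the role calls for (not code from Tata Electronics), here is scaled dot-product self-attention in PyTorch:

```python
# Scaled dot-product self-attention, the core operation behind the transformer
# architectures used in large-model pre-training.
import math
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor, w_q, w_k, w_v) -> torch.Tensor:
    """x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)                        # attention weights
    return weights @ v                                         # weighted sum of values

batch, seq_len, d_model, d_k = 2, 5, 16, 8
x = torch.randn(batch, seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([2, 5, 8])
```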

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - AI Senior Engineer!

In this role, you'll leverage Azure's or AWS's advanced AI capabilities, including Azure Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS SageMaker, and AWS Bedrock, to deliver scalable and efficient solutions. You will also ensure seamless integration into enterprise workflows and operationalize models with robust monitoring and optimization.

Responsibilities
- AI Orchestration: Design and manage AI orchestration flows using tools such as Prompt Flow or LangChain. Continuously evaluate and refine models to ensure optimal accuracy, latency, and robustness in production.
- Document AI and Data Extraction: Build AI-driven workflows for extracting structured and unstructured data from receipts, reports, and other documents using Azure AI Document Intelligence and Azure Cognitive Services.
- RAG Systems: Design and implement retrieval-augmented generation (RAG) systems using vector embeddings and LLMs for intelligent and efficient document retrieval. Optimize RAG workflows for large datasets and low-latency operations.
- Monitoring and Optimization: Implement advanced monitoring using Azure Monitor, Application Insights, and Log Analytics to track model performance and system health. Continuously evaluate and refine models and workflows to meet enterprise-grade SLAs for performance and reliability.
- Collaboration and Documentation: Collaborate with data engineers, software developers, and DevOps teams to deliver robust and scalable AI-driven solutions. Document best practices, workflows, and troubleshooting guides for knowledge sharing and scalability.

Qualifications we seek in you

Minimum qualifications:
- Proven experience with machine learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS Bedrock, and SageMaker, plus proficiency in building and optimizing RAG systems for document retrieval and comparison.
- Strong understanding of AI/ML concepts, including natural language processing (NLP), embeddings, model fine-tuning, and evaluation. Experience applying machine learning algorithms and techniques to complex, real-world problems. Familiarity with state-of-the-art LLM architectures and their practical implementation in production environments. Expertise in designing and managing Prompt Flow pipelines for task-specific customization of LLM outputs.
- Hands-on experience training LLMs and evaluating their performance using appropriate metrics for accuracy, latency, and robustness, with a proven ability to iteratively refine models to meet specific business needs and optimize them for production.
- Knowledge of ethical AI practices and responsible AI frameworks.
- Experience with CI/CD pipelines using Azure DevOps or equivalent tools, and familiarity with containerized environments managed through Docker and Kubernetes.
- Knowledge of Azure Key Vault, Managed Identities, and Azure Active Directory (AAD) for secure authentication.
- Experience with PyTorch or TensorFlow.
- Proven track record of developing and deploying Azure-based AI solutions for large-scale, enterprise-grade environments.
- Strong analytical and problem-solving skills, with a results-driven approach to building scalable and secure systems.

Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - Drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are never required to pay to participate in our hiring process. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
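Below is a toy sketch of the vector-embedding retrieval step at the core of the RAG systems described above; the embed() function is a deliberate stand-in for a real embedding model such as an Azure OpenAI embeddings deployment:

```python
# Toy RAG retrieval: embed documents and a query, rank by cosine similarity,
# and pass the best match to an LLM as context.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hash words into a fixed-size bag-of-words vector (stand-in only)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

docs = [
    "Invoice 1042 totals 5,300 USD and is due on 15 March.",
    "The travel policy caps hotel spend at 180 USD per night.",
    "Quarterly report: revenue grew 12 percent year over year.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "What is the hotel spend limit in the travel policy?"
scores = doc_vecs @ embed(query)      # cosine similarity (vectors are unit norm)
best = docs[int(np.argmax(scores))]   # top-1 retrieved chunk
print("Retrieved context:", best)
# The retrieved context would then be inserted into the LLM prompt.
```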

Posted 1 month ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Manager - AI Senior Engineer - Agentic AI!

In this role, you'll be part of Genpact's transformation under GenpactNext as we lead the shift to Agentic AI Solutions: domain-specific, autonomous systems that redefine how we deliver value to clients. You'll help drive the adoption of innovations like the Genpact AP Suite in finance and accounting, with more Agentic AI products set to expand across service lines.

You'll leverage Azure's or AWS's advanced AI capabilities, including Azure Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS SageMaker, and AWS Bedrock, to deliver scalable and efficient solutions. You will also ensure seamless integration into enterprise workflows and operationalize models with robust monitoring and optimization.

Responsibilities
- AI Orchestration: Design and manage AI orchestration flows using tools such as Prompt Flow or LangChain. Continuously evaluate and refine models to ensure optimal accuracy, latency, and robustness in production.
- Document AI and Data Extraction: Build AI-driven workflows for extracting structured and unstructured data from receipts, reports, and other documents using Azure AI Document Intelligence and Azure Cognitive Services.
- RAG Systems: Design and implement retrieval-augmented generation (RAG) systems using vector embeddings and LLMs for intelligent and efficient document retrieval. Optimize RAG workflows for large datasets and low-latency operations.
- Monitoring and Optimization: Implement advanced monitoring using Azure Monitor, Application Insights, and Log Analytics to track model performance and system health. Continuously evaluate and refine models and workflows to meet enterprise-grade SLAs for performance and reliability.
- Collaboration and Documentation: Collaborate with data engineers, software developers, and DevOps teams to deliver robust and scalable AI-driven solutions. Document best practices, workflows, and troubleshooting guides for knowledge sharing and scalability.

Qualifications we seek in you

Minimum qualifications:
- Proven experience with machine learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS Bedrock, and SageMaker, plus proficiency in building and optimizing RAG systems for document retrieval and comparison.
- Strong understanding of AI/ML concepts, including natural language processing (NLP), embeddings, model fine-tuning, and evaluation. Experience applying machine learning algorithms and techniques to complex, real-world problems. Familiarity with state-of-the-art LLM architectures and their practical implementation in production environments. Expertise in designing and managing Prompt Flow pipelines for task-specific customization of LLM outputs.
- Hands-on experience training LLMs and evaluating their performance using appropriate metrics for accuracy, latency, and robustness, with a proven ability to iteratively refine models to meet specific business needs and optimize them for production.
- Knowledge of ethical AI practices and responsible AI frameworks.
- Experience with CI/CD pipelines using Azure DevOps or equivalent tools, and familiarity with containerized environments managed through Docker and Kubernetes.
- Knowledge of Azure Key Vault, Managed Identities, and Azure Active Directory (AAD) for secure authentication.
- Experience with PyTorch or TensorFlow.
- Proven track record of developing and deploying Azure-based AI solutions for large-scale, enterprise-grade environments.
- Strong analytical and problem-solving skills, with a results-driven approach to building scalable and secure systems.

Why join Genpact
- Lead AI-first transformation - Build and scale AI solutions that redefine industries.
- Make an impact - Drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are never required to pay to participate in our hiring process. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
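As a generic illustration of the accuracy-and-latency evaluation loop these responsibilities describe, here is a tiny harness in plain Python; call_model() is a placeholder for a real Azure OpenAI or Bedrock invocation, and the examples are invented:

```python
# Tiny evaluation harness: run a model over labelled examples and report
# exact-match accuracy plus latency. call_model() is a stub to be replaced
# with a real LLM call.
import statistics
import time

def call_model(prompt: str) -> str:
    return "42"  # stub; replace with a real Azure OpenAI / Bedrock call

eval_set = [
    {"prompt": "What is 6 x 7? Answer with the number only.", "expected": "42"},
    {"prompt": "What is 50 - 8? Answer with the number only.", "expected": "42"},
]

latencies, correct = [], 0
for example in eval_set:
    start = time.perf_counter()
    output = call_model(example["prompt"]).strip()
    latencies.append(time.perf_counter() - start)
    correct += int(output == example["expected"])

print(f"accuracy: {correct / len(eval_set):.2%}")
print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms; "
      f"max latency: {max(latencies) * 1000:.1f} ms")
```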

Posted 1 month ago

Apply

3.0 - 4.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities

AI Model Deployment & Integration: Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems). Implement automated CI/CD pipelines for seamless deployment and scaling of AI models. Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI Engineers. Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP).

Monitoring & Performance Management: Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics. Set up alerts and automated workflows to manage performance degradation and retraining triggers. Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs. Collaborate with Data Scientists to establish feedback loops for continuous model improvement.

Automation & MLOps Best Practices: Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models. Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance. Use infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines.

Security & Compliance: Implement security measures to prevent prompt injections, data leakage, and unauthorized model access. Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR). Regularly audit AI pipelines for ethical AI practices and data governance.

Collaboration & Process Improvement: Work closely with AI Engineers, Product Managers, and IT teams to align AI operational processes with business needs. Contribute to the development of AI Ops documentation, playbooks, and best practices. Continuously evaluate emerging GenAI operational tools and processes to drive innovation.

Qualifications & Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related field. Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus.

Experience: 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions. Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments. Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI.

Technical Skills: Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow. Hands-on experience with monitoring tools (Prometheus, Grafana, ELK Stack) for AI performance tracking. Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads. Familiarity with automation scripting using Python, Bash, or PowerShell. Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization. Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment. Strong understanding of AI security principles, including data privacy and governance considerations.
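As a small, generic example of the drift monitoring mentioned above (one possible approach, not a prescribed toolchain), here is a two-sample Kolmogorov-Smirnov check on a single input feature using SciPy; the data here is synthetic:

```python
# Simple data-drift check: compare a feature's live distribution against the
# training baseline with a two-sample Kolmogorov-Smirnov test. In production
# this would run on real feature logs and feed an alerting/retraining trigger.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # baseline sample
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)      # drifted traffic

stat, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")

if p_value < 0.01:  # illustrative threshold
    print("Drift detected: raise an alert and consider triggering retraining.")
```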

Posted 2 months ago

Apply

8 - 13 years

20 - 35 Lacs

Bengaluru

Work from Office

Role & Responsibilities

You will be embedded within a team of machine learning engineers and data scientists responsible for building and productizing generative AI and deep learning solutions. You will:
- Design, develop, and evaluate generative AI models for vision and data science tasks.
- Collaborate with cross-functional teams to integrate AI-driven solutions into business operations.
- Build and enhance frameworks for automation, data processing, and model deployment.
- Develop and deploy AI agents, including Retrieval-Augmented Generation (RAG) systems.
- Use Gen-AI tools and workflows to improve the efficiency and effectiveness of AI solutions.
- Conduct research and stay up to date with the latest advancements in generative AI and related technologies.

Requirements:
- B.Tech, M.Tech, or PhD in computer science, electrical engineering, statistics, or mathematics.
- At least 8 years of working experience in data science, computer vision, or a related domain.
- Proven experience building and deploying generative AI solutions.
- Strong programming skills in Python and solid computer science fundamentals, particularly algorithms, data structures, and OOP.
- Experience with Gen-AI tools and workflows.
- Proficiency in both vision-related AI and data analysis using generative AI.
- Experience with cloud platforms and deploying models at scale.
- Experience with transformer architectures and large language models (LLMs).
- Familiarity with frameworks such as TensorFlow, PyTorch, and Hugging Face.
- Proven leadership and team management skills.

Desired skills:
- Working experience with AWS is a plus.
- Knowledge of best practices in software development, including version control, testing, and continuous integration.
- Working knowledge of common industry frameworks and tools for building with LLMs, such as OpenAI, GPT, BERT, etc.
- Experience with MLOps tools and practices for continuous deployment and monitoring of AI models.
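For a minimal, generic example of the Hugging Face tooling listed above (the model choice is illustrative and far smaller than a production LLM):

```python
# Minimal Hugging Face example: load a small open model and generate text.
# Production systems would use a larger LLM and typically serve it behind an
# API rather than calling it inline.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Retrieval-augmented generation improves factual accuracy by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```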

Posted 2 months ago

Apply