
91 LLMOps Jobs

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 - 15.0 years

0 Lacs

karnataka

On-site

Role Overview: You are a seasoned AI / GenAI Solution Architect with 15+ years of total experience, including 5+ years of relevant expertise in AI/ML, Data Science, and GenAI architectures. Your role involves technical architecture design, Hyperscaler solutions (Azure, GCP, AWS), and end-to-end delivery of AI-driven platforms and agentic solutions. You will be responsible for solution architecture & design, proposal creation, estimations, and client engagement across large-scale AI transformation programs. Key Responsibilities: - Architect and deliver GenAI, LLM, and agentic AI solutions - Excellent understanding of Agentic Frameworks, MLOps, LLMOps - Design enterprise-level scalable AI-ready architectures - Lead proposals, estimations, and RFP/RFI responses - Partner with clients as a trusted AI advisor for transformation roadmaps - Exposure to data engineering pipelines - Build reference architectures and reusable GenAI frameworks Qualifications: - 15+ years of IT experience; 5+ years in AI/ML/GenAI architecture - Strong data science, MLOps, and solutioning background - Deep expertise with Azure (priority), GCP, AWS - Proven client-facing skills: RFPs, proposals, and stakeholder management - Certifications in Azure AI / GCP ML / AWS ML (preferred) - Exposure to regulatory-compliant AI systems in BFSI, Healthcare, Manufacturing, Public Sector, etc. is an added advantage Additional details about the company: NTT DATA is a $30 billion trusted global innovator of business and technology services. They serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, they have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. They are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit their website at us.nttdata.com.

Posted 10 hours ago


4.0 - 8.0 years

0 Lacs

ahmedabad, gujarat

On-site

As a Senior Backend Developer at our company, you will play a crucial role in refining the system architecture, ensuring seamless integration of various tech stacks, and enhancing the DevOps process. Your understanding of software paradigms and ability to design solutions for ML, MLOps, and LLMOps challenges will be key in leading your team towards successful project implementation. You will have the opportunity to work with a wide range of modern technologies and contribute to building an enterprise-grade ML platform. Responsibilities: - Act as a bridge between engineering and product teams, designing a scalable system that aligns with long-term product roadmaps. - Translate product insights into detailed engineering requirements and oversee the development process within sprint cycles. - Demonstrate expertise in solution design and documentation, ensuring high-level and low-level designs are well-defined. - Develop "Zero Defect Software" efficiently using cutting-edge tools like ChatGPT and Copilot. - Foster a culture of secure, instrumented, and resilient software development, focusing on high-quality, performance-driven code. - Evaluate and enhance DevOps processes for a cloud-native codebase, emphasizing API Data Contracts, OOAD, Microservices, Data Models, and Concurrency concepts. - Lead with a focus on quality, innovation, and empowering leadership. Qualifications: - 4-7 years of hands-on experience developing full-fledged Systems/Microservices using Python. - Minimum 3 years of senior engineering responsibilities and people mentorship/leadership experience. - Proficiency in object-oriented design, agile development methodologies, and cloud-native software development. - Strong skills in SQL, NoSQL, OLAP DBs, and familiarity with modern technologies. - Bonus: Understanding of Generative AI frameworks/libraries such as RAG, LangChain, and LlamaIndex. Skills: - Excellent documentation skills. - Ability to make authoritative decisions, hold accountability, motivate, lead, and empower team members. - Strong independent contributor and team player. - Working knowledge of ML and MLOps concepts. To excel in this role, you should have a product mindset, care about customers, take ownership, collaborate effectively, solve problems efficiently, and align with our core values of innovation, curiosity, accountability, trust, fun, and social good.

Posted 11 hours ago


8.0 - 12.0 years

0 Lacs

haryana

On-site

As a Generative AI Engineer at Xebia, you will play a crucial role in designing and delivering enterprise-scale solutions for strategic clients. Your expertise in both Application Architecture and AI/ML/Generative AI will be instrumental in creating robust cloud-native applications and driving innovation through Agentic AI and emerging AI technologies. **Key Responsibilities:** - Translate client business problems into comprehensive end-to-end solution architectures. - Design cloud-native applications and AI-driven solutions using a wide range of AWS services. - Develop solution diagrams, architecture blueprints, and detailed design documents (HLD & LLD). - Architect and oversee the delivery of Generative AI and Agentic AI solutions utilizing AWS Bedrock, SageMaker, and LLMOps. - Define data pipelines and integration patterns leveraging AWS-native services. - Ensure that solutions adhere to AWS Well-Architected best practices for scalability, performance, and security. - Lead engineering teams technically, ensuring alignment from design to delivery. - Keep abreast of emerging AI/ML technologies and introduce innovative approaches to client projects. - Collaborate closely with business and IT stakeholders to align technology solutions with overarching business objectives. **Qualifications Required:** - Strong background in application architecture encompassing cloud-native design, microservices, APIs, and integration patterns. - Hands-on experience with a variety of AWS services for applications, data management, and AI/ML. - Proficiency in Generative AI, Agentic AI, and LLMOps. - Demonstrated ability to craft end-to-end solution designs (HLD/LLD) and lead technical delivery. - Experience with enterprise data platforms including data lakes, ETL processes, and analytics on AWS. - Excellent consulting, communication, and client-facing skills. In addition to the technical requirements, preferred certifications for this role include: - AWS Solution Architect Professional (or at least Associate level certification). - AWS Machine Learning Specialty certification.
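
For context, the AWS Bedrock work referenced in this listing usually comes down to calls like the minimal sketch below, which sends a prompt to a hosted foundation model through the Bedrock runtime with boto3; the region, model ID, and request format are illustrative assumptions rather than details from the listing.

```python
# Hedged sketch: invoking a foundation model on AWS Bedrock via boto3.
# The model ID, region, and Anthropic message format below are assumptions;
# check which models are enabled in your own account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate(prompt: str, max_tokens: int = 512) -> str:
    """Send a single user prompt to an Anthropic model hosted on Bedrock."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # assumed API version string
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

if __name__ == "__main__":
    print(generate("Summarize the AWS Well-Architected pillars in one sentence each."))
```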

Posted 15 hours ago


10.0 - 12.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Key Responsibilities Leadership and Team Management Lead, manage, and mentor a high-performing team of data scientists, ensuring excellence in technical execution and professional development. Define, implement, and refine the data science strategy and roadmap, aligning with the organization's long-term goals and evolving business needs. Cultivate a culture of technical innovation, collaboration, and continuous learning within the data science team. Strategic Collaboration and Communication Partner with senior leadership to identify and address strategic opportunities and challenges through data science initiatives. Present complex, data-driven insights and strategic recommendations to executive stakeholders, driving high-level decision-making. Develop and nurture relationships with key stakeholders across the organization, ensuring effective collaboration and alignment on data-driven projects. Innovation and Research Drive the development and deployment of state-of-the-art data science methodologies, leveraging the latest advancements in machine learning, artificial intelligence, LLMs, and computational techniques. Conduct pioneering research in emerging areas of data science, integrating new technologies and methodologies to solve complex business problems. Lead the evaluation and adoption of advanced tools, technologies, and frameworks to enhance the organization's data science capabilities. Project Management and Execution Oversee the execution of complex, high-impact data science projects, ensuring they meet rigorous technical standards, deadlines, and business objectives. Ensure the scalability, reliability, and robustness of data science solutions through advanced model development, deployment, and monitoring practices. Proactively manage project risks and mitigate potential issues, ensuring successful delivery and alignment with strategic goals. Thought Leadership and Continuous Learning Serve as a thought leader in the data science field, representing the organization at industry events, conferences, and forums with high-level presentations and publications. Promote a culture of continuous learning and technical excellence within the data science team and across the organization. Contribute to the organization's intellectual capital through research, white papers, and industry collaborations. Qualifications Ph.D., Master's, or Bachelor's in Data Science, Computer Science, Statistics, Mathematics, or a related field. 10+ years of experience in data science or a related field, with a proven track record of leading and executing high-impact, technically complex projects. Extensive expertise in advanced machine learning algorithms, deep learning, statistical modeling, LLMs, and AI, with hands-on experience in cutting-edge techniques. Mastery of programming languages such as Python, with extensive experience in advanced machine learning libraries and frameworks (e.g., TensorFlow, PyTorch, Keras). Advanced skills in SQL and hands-on experience with big data technologies. Demonstrated expertise in MLOps/LLMOps practices, including model deployment, monitoring, and optimization at scale. Exceptional problem-solving, analytical, and strategic thinking skills, with the ability to tackle the most complex data challenges. Proven leadership abilities, with experience managing and developing high-performing technical teams.
Outstanding communication skills, both written and verbal, with the capability to convey complex technical concepts to diverse audiences and influence executive decision-making. Recognized as a thought leader in the data science community, with a strong portfolio of publications, conference presentations, or significant contributions to the field. Ability to manage and deliver multiple high-stakes projects simultaneously, with a deep understanding of how to align data science initiatives with overarching business objectives and strategies. Our Commitment to a Culture of Inclusion & Belonging Ecolab is committed to fair and equal treatment of associates and applicants and furthering the principles of Equal Opportunity to Employment. We will recruit, hire, promote, transfer and provide opportunities for advancement based on individual qualifications and job performance in all matters affecting employment, compensation, benefits, working conditions, and opportunities for advancement. Ecolab will not discriminate against any associate or applicant for employment because of race, religion, color, creed, national origin, citizenship status, sex, sexual orientation, gender identity and expression, genetic information, marital status, age, or disability.

Posted 3 days ago


8.0 - 12.0 years

0 Lacs

bangalore, karnataka

On-site

As a highly motivated Manager in Data & Analytics with a minimum of 8 years of experience, you will be responsible for leading cross-functional teams in designing, developing, and deploying next-generation AI solutions. Your role will require a blend of technical expertise in building GenAI and Agentic AI workflows, strong project management skills, and the ability to manage client relationships and business outcomes. You will oversee the delivery of AI-driven solutions, ensure alignment with client needs, and foster a high-performance culture within the team. Key Responsibilities: - Lead, mentor, and grow a team of AI engineers, data scientists, and solution architects. - Define team goals, set performance expectations, and conduct regular feedback and evaluations. - Foster collaboration, innovation, and knowledge-sharing within the team. - Drive adoption of best practices in AI development, MLOps, and project execution. - Own end-to-end delivery of AI projects, from scoping and planning to deployment and monitoring. - Define project roadmaps, allocate resources, and track progress against milestones. - Manage risks, dependencies, and escalations while ensuring timely delivery. - Coordinate with cross-functional stakeholders including product, design, data, and engineering teams. - Act as the primary point of contact for clients, ensuring strong relationships and trust. - Translate client business problems into AI solutions using GenAI and agentic AI frameworks. - Prepare and deliver executive-level presentations, demos, and progress updates. - Negotiate scope, timelines, and deliverables with clients and internal stakeholders. - Shape and refine the strategy for AI adoption, focusing on generative and agentic workflows. - Identify new opportunities for AI-driven automation, augmentation, and decision support. - Evaluate emerging technologies, frameworks, and tools to enhance solution effectiveness. - Partner with business leaders to define success metrics and measure ROI of AI initiatives. - Design, implement, and optimize end-to-end machine learning and LLM pipelines, ensuring robustness, scalability, and adaptability to changing business requirements. - Drive best-in-class practices for data curation, system reliability, versioning, and automation (CI/CD, monitoring, data contracts, compliance) for ML deployments at scale. - Lead the adoption of modern LLMOps and MLOps tooling and principles such as reproducibility, observability, automation, and model governance across our cloud-based infrastructure (AWS, Snowflake, dbt, Docker/Kubernetes). - Identify and assess new business opportunities where applied machine learning or LLMs create impact. Rapidly move concepts from prototype to full production solutions enabling measurable business value. - Collaborate closely with product, engineering, analytics, and business stakeholders to define requirements, set measurable goals, and deliver production-grade outcomes. - Serve as a technical thought leader and trusted advisor, explaining complex architectural decisions in clear, actionable terms to both technical and non-technical stakeholders. - Evangelize engineering best practices and frameworks that enable high-quality delivery, system reliability, and strong data governance. - Translate analytical insights into business outcomes, helping organizations realize the ROI of AI/ML investments. Qualifications & Skills: - Education: Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or related field.
- Experience: 8-12 years of total experience, with at least 3-5 years in AI/ML leadership roles. - Proven expertise in Generative AI (LLMs, text-to-X, multimodal) and Agentic AI (orchestration frameworks, autonomous agents, tool integration). - Strong understanding of cloud platforms (AWS/Azure/GCP), LLMOps, and AI governance. - Demonstrated ability to manage large-scale AI/ML projects with multi-stakeholder involvement. - Exceptional communication and client engagement skills. - Experience in people leadership: coaching, mentoring, and building high-performing teams. At EY, you will be part of a team dedicated to building a better working world by creating new value for clients, people, society, and the planet. Enabled by data, AI, and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. With services in assurance, consulting, tax, strategy, and transactions, EY teams work across a full spectrum of services in more than 150 countries and territories.

Posted 4 days ago


12.0 - 20.0 years

36 - 72 Lacs

bengaluru

Work from Office

Responsibilities: * Design, develop & maintain AI solutions using LLMOps, Agentic AI & NLP. * Collaborate with cross-functional teams on AI platform strategy & implementation. * Handle model development, training, deployment at scale, and performance monitoring.

Posted 4 days ago


12.0 - 18.0 years

50 - 60 Lacs

hyderabad

Remote

AI Architect - Agentic AI Platform. Role Overview: We are seeking an experienced AI Solution Architect to lead the design and development of an Agentic AI platform for the Mortgage Industry. The ideal candidate will combine deep technical expertise in AI/ML and LLMOps with strong domain knowledge in financial services, mortgage processes, and regulatory compliance. This role requires defining the multi-agent orchestration framework, integration strategy, and ensuring scalability, security, and reliability of the platform. Location: Hyderabad (preferred) / Remote. Experience: 12+ years. Key Responsibilities: Architect the end-to-end Agentic AI ecosystem for mortgage workflows (KYC, credit scoring, risk evaluation, loan origination, compliance checks). Design the multi-agent orchestration layer (task delegation, planning, monitoring, feedback loops). Define LLMOps/MLOps pipelines for continuous training, testing, and deployment of AI agents. Ensure platform scalability, fault tolerance, and secure integration with core banking systems, CRMs, and external APIs (credit bureaus, KYC vendors). Select and optimize AI frameworks (LangChain, LlamaIndex, Haystack, DSPy, Hugging Face). Work with compliance and legal teams to ensure GDPR, CCPA, and financial regulations (FCRA, AML/KYC, Basel III) are adhered to. Mentor technical teams and establish best practices for AI agent design and governance. Required Skills & Experience: At least 5 years in AI/ML solution architecture and proven experience architecting AI-powered enterprise platforms, preferably in financial services/mortgage. Strong knowledge of LLMs, Retrieval-Augmented Generation (RAG), fine-tuning, embeddings, and vector databases. Experience with cloud platforms (Azure AI/ML, AWS SageMaker, GCP Vertex AI) and containerized deployments (Docker, Kubernetes). Hands-on with LangChain, OpenAI, Anthropic, Cohere, or open-source LLMs (LLaMA, Falcon, Mistral). Expertise in MLOps/LLMOps, CI/CD pipelines, observability, and model governance. Familiarity with mortgage lifecycle systems (loan origination systems, underwriting platforms, credit risk engines). Strong communication skills and ability to work cross-functionally.
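
As a rough illustration of the RAG retrieval step this role architects, the framework-agnostic sketch below ranks documents against a query and assembles a grounded prompt; the toy bag-of-words embedding is a stand-in for a real embedding model, and the mortgage snippets are invented examples.

```python
# Minimal, framework-agnostic sketch of the retrieval step in a RAG pipeline.
# embed() is a toy bag-of-words stand-in so the example runs on its own;
# a production system would call a real embedding model and a vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts (replace with a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "KYC checks verify a borrower's identity before loan origination.",
    "Credit scoring estimates default risk from bureau and application data.",
    "Escrow accounts hold funds for property taxes and insurance.",
]
print(build_prompt("How is a borrower's identity verified?", docs))
```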

Posted 5 days ago


10.0 - 15.0 years

15 - 30 Lacs

pune, chennai, delhi / ncr

Hybrid

Your Role: As the Agentic and Generative AI Infrastructure Lead, you will architect and manage the foundational infrastructure that powers enterprise-scale AI innovation. This role is central to enabling the secure, scalable, and cost-effective deployment of Agentic AI workloads across multi-tenant and multi-service environments. You will be responsible for orchestrating the technical backbone that supports AI agents as they interact with APIs, core data systems, and hybrid cloud environments - including legacy and third-party platforms. With a strategic builder's mindset, you will lead the design and implementation of infrastructure that supports the full AI lifecycle - from development and training to deployment and monitoring. Your work will ensure that AI systems are not only performant and reliable but also governed by robust access controls, cost containment strategies, and compliance safeguards. This role is pivotal in balancing innovation with operational discipline, ensuring that AI capabilities scale responsibly across the organization. Your Profile (needed): Strong technical background with a degree in management or computer science, and proficiency in SQL and Python for data analytics. At least 2 years of experience building and managing enterprise-grade cloud infrastructure (AWS, Azure, GCP), with a focus on multi-tenant and multi-service architectures. Deep understanding of AI orchestration, including data flow management, computational resource optimization, and coordination of multiple AI models. Proven ability to implement robust security and access controls, including RBAC, network policies, and isolation techniques. Experience with cost containment strategies for cloud-based AI deployments, including awareness of API call fees and Retrieval-Augmented Generation (RAG) training costs. Foundational understanding of Agentic AI concepts and their application in enterprise environments. Your Profile (nice to have): Background in technical service design. Experience with web application deployment and container orchestration for AI/ML workloads. Knowledge of deployment models (on-premise vs. public cloud) and their trade-offs. Familiarity with audit logging and cost allocation in multi-tenant environments. Awareness of the Agentic AI development lifecycle, including deployment automation and workflow optimization. Consulting experience or a client-facing background in automation or infrastructure roles.

Posted 5 days ago


10.0 - 14.0 years

0 Lacs

bangalore, karnataka

On-site

As a Global Product Lead at MiQ, a global programmatic media partner, you will play a crucial role in leading and expanding a unified Data Management and LLMOps framework. Your responsibilities will include: - Leading and expanding a unified Data Management and LLMOps framework across various functions such as engineering, analytics, data science, and ad operations. - Architecting and managing automated data pipelines for onboarding, cleansing, and processing structured and unstructured data using advanced AI/agentic automation. - Overseeing the adoption and execution of Agentic AI in data workflows, defining and scaling LLMOps best practices, and ensuring seamless orchestration of AI models. - Driving the implementation and expansion of privacy-enhancing technologies like data clean rooms, federated learning, and privacy-preserving data matching. - Embedding robust data governance policies across the data and AI lifecycle, including lineage, minimization, quality controls, regulatory compliance, and AI-ethics frameworks. - Collaborating with various teams to identify and deliver on emerging data and AI opportunities, and championing MiQ's "data as a product" culture globally. - Providing executive reporting on goals, metrics, and progress related to AI-driven data management, cost oversight, and vendor engagement. - Fostering innovation while prioritizing commercial value, evaluating agentic or LLM-powered solutions for feasibility, risk, and business alignment. - Leading, mentoring, and inspiring a team of next-generation product managers, engineers, and applied AI specialists. - Being a recognized industry thought leader on Data Management, Agentic AI, LLMOps, and Adtech best practices internally and externally. What impact will you create? - A vision for the future of AI-powered, privacy-first, and agentic data management in Adtech. - A track record of rolling out LLMOps frameworks and a proven ability to balance innovation with commercial practicality. - Industry presence: bringing new ideas from AI research, open source, and the privacy tech landscape into business practice. Your stakeholders will mainly include analysts, data scientists, and product teams at MiQ. What You'll Bring: - 10+ years of experience in data management and product leadership with expertise in agentic AI, LLMOps, or advanced AI/ML productization. - Direct experience in delivering enterprise-grade data management ecosystems and AI/LLM-powered solutions, preferably in Adtech or MarTech. - Deep familiarity with data governance for AI, including ethical frameworks, responsible innovation, synthetic data, and regulatory compliance. - Extensive hands-on experience with the contemporary data and ML stack and integrating large-scale LLM pipelines. - Demonstrated ability to coach and scale high-performing teams in dynamic product settings. - Strong operational leadership and collaboration skills at C-level and in cross-functional global environments. - Up-to-date knowledge of relevant Adtech and MarTech topics. MiQ values passion, determination, agility, unity, and courage, and fosters a welcoming culture committed to diversity, equity, and inclusion. In return, you can expect a hybrid work environment, new hire orientation and training, internal and global mobility opportunities, competitive healthcare benefits, bonus and performance incentives, generous annual PTO, and employee resource groups supporting diversity and inclusion initiatives.
If you have a passion for this role, apply today and be part of our team at MiQ, an Equal Opportunity Employer.

Posted 5 days ago


8.0 - 14.0 years

0 Lacs

karnataka

On-site

As a Platform Development and Machine Learning expert at Adobe, you will play a crucial role in changing the world through digital experiences by building scalable AI platforms and designing ML pipelines. Your responsibilities will include: - Building scalable AI platforms that are customer-facing and evangelizing the platform with customers and internal stakeholders. - Ensuring platform scalability, reliability, and performance to meet business needs. - Designing ML pipelines for experiment management, model management, feature management, and model retraining. - Implementing A/B testing of models and designing APIs for model inferencing at scale. - Demonstrating proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. - Serving as a subject matter expert in LLM serving paradigms and possessing deep knowledge of GPU architectures. - Expertise in distributed training and serving of large language models and proficiency in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. - Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results. - Reducing training and resource requirements for fine-tuning LLM and LVM models. - Having extensive knowledge of different LLM models and providing insights on the applicability of each model based on use cases. - Delivering end-to-end solutions from engineering to production for specific customer use cases. - Showcasing proficiency in DevOps and LLMOps practices, including knowledge in Kubernetes, Docker, and container orchestration. - Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Your skills matrix should include expertise in LLMs such as Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama; LLM Ops such as MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI; databases/data warehouses such as DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery; cloud knowledge of AWS/Azure/GCP; DevOps knowledge of Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus; and cloud certifications (bonus) such as AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert. Proficiency in Python, SQL, and JavaScript is also required. Adobe is committed to creating exceptional employee experiences and values diversity. If you require accommodations to navigate the website or complete the application process, please contact accommodations@adobe.com or call (408) 536-3015.
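
One of the responsibilities above is A/B testing of models behind an inference API; the sketch below shows a common deterministic way to pin each user to a model variant by hashing their ID. The model names and the 10% traffic split are assumptions for illustration, not details from the listing.

```python
# Illustrative sketch of deterministic A/B routing for model inference.
# Hashing the user ID keeps each user pinned to the same variant across requests.
import hashlib

VARIANTS = {"control": "model-v1", "candidate": "model-v2"}  # hypothetical model names
CANDIDATE_SHARE = 0.10  # fraction of traffic routed to the candidate model

def assign_variant(user_id: str) -> str:
    """Map a user ID to one of 1000 buckets and pick a variant by share."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 1000
    return "candidate" if bucket < CANDIDATE_SHARE * 1000 else "control"

def route_inference(user_id: str, payload: dict) -> dict:
    variant = assign_variant(user_id)
    return {"model": VARIANTS[variant], "variant": variant, "input": payload}

if __name__ == "__main__":
    for uid in ("user-17", "user-42", "user-99"):
        print(route_inference(uid, {"text": "example request"}))
```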

Posted 5 days ago


7.0 - 9.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Roles and Responsibilities: Experience in implementation of ML/AI solutions for enterprise level on Azure/AWS cloud Communicate effectively with clients on brainstorming the solutions Conduct research and development majorly on AI/ML implementations Implement RAG architecture for confidential data use cases Apply deep learning techniques and machine learning experience Fine-tune LLMs for optimal performance Develop and implement generative AI models using state-of-the-art techniques and frameworks Collaborate with cross-functional teams to understand project requirements and design solutions that address business needs Research and experiment with new AI algorithms, architectures, and tools to improve model performance and capabilities Train, evaluate, and fine-tune generative models using large datasets to achieve desired outcomes Optimize model inference and deployment pipelines for efficiency and scalability in production environments Stay updated with the latest advancements in AI research and technology and contribute to knowledge sharing within the team Participate in code reviews, documentation, and knowledge transfer sessions to ensure high-quality deliverables and effective collaboration Good to have: experience in healthcare and life sciences relevant to projects Leverage MLOps and LLMOps experience to optimize operations Act as a hands-on leader with GenAI and Machine Learning experience Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field 7+ years of experience in implementing ML/AI solutions for enterprise level on Azure/AWS cloud Strong programming skills in languages such as Python, with experience in data science Proficiency in generative AI techniques, including GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and related architectures Good understanding of machine learning concepts, algorithms, and methodologies Experience with data preprocessing, model training, and evaluation using large-scale datasets Excellent problem-solving skills and the ability to work independently as well as part of a team Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders Experience in healthcare and life sciences is a plus Familiarity with MLOps, AIOps, and LLMOps is beneficial Good to have certifications in AI/ML or related fields

Posted 6 days ago


9.0 - 14.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Description Roles & responsibilities Project Planning & Execution Strategize and oversee the execution of AI application development and deployment within Audit functions. Lead requirement gathering workshops with Audit Innovation teams to define and refine project scope and deliverables. Translate business requirements into project plans and actionable tasks, ensuring alignment with timelines and resources. Team Leadership & Mentorship Manage and mentor a cross-functional team of Managers, Seniors, and Associates, promoting high-quality solution delivery. Facilitate technical and soft-skills training to continuously develop team capabilities in audit and AI domains. Foster a collaborative and forward-thinking work environment that encourages innovation and continuous improvement. Stakeholder & Communication Management Maintain strong relationships with stakeholders across member firms, ensuring clear communication and alignment on project goals. Coordinate with onshore teams to understand business needs and guide the development team in building custom solutions. Demonstrate professionalism and clarity in communication with engagement teams and leadership. Technology & Innovation Enablement Support the development of proof of concepts for audit-related AI use cases in collaboration with Developers and Solution Architects. Stay current with emerging tools and technologies relevant to AI in Audit, and promote their adoption within projects. Encourage knowledge sharing and the use of technology to enhance service delivery. Agile & Delivery Frameworks Apply Agile and Scaled Agile methodologies to manage iterative delivery of AI solutions. Lead cross-functional teams including Business Analysts and technical experts to ensure timely and impactful project outcomes. Hold relevant certifications such as PMP, PMI-ACP, or SAFe Agilist to support structured project delivery. Development Responsibilities Collaborate closely with Business Analysts to understand project requirements and provide guidance on solution design and best practices for audit-focused tools. Lead the preparation and review of documentation including business requirements, test cases, user manuals, and other project artifacts to ensure readiness for deployment. Coordinate testing and validation of new solution releases in partnership with the Product Integrity team. Ensure robust project control mechanisms, including change control, risk management, and testing protocols. Monitor and manage resources, budgets, and capital costs, ensuring alignment with project goals and stakeholder expectations. People Management Responsibilities Lead and nurture a high-performing team, fostering a culture of innovation, collaboration, and accountability. Act as a performance manager, coaching GTS professionals on career development and providing continuous feedback throughout the year. Drive team-level and GDC-wide performance development initiatives, ensuring alignment with organizational goals. Demonstrate strong stakeholder management skills, maintaining effective relationships across internal and external teams. Project Leadership Responsibilities Lead or contribute to strategic initiatives and corporate social responsibility programs within the firm. Apply proven project management skills to oversee multiple client-facing projects, ensuring timely delivery and issue resolution. Champion efficiency, compliance, and innovation across projects, identifying opportunities for improvement. 
Support business development efforts and lead strategic programs from planning through execution, ensuring delivery within scope, time, and budget. Ensure adherence to internal frameworks and guidelines, promoting compliance across self and team activities. Mandatory technical & functional skills PMP/CM or PMI-ACP, SAFe POPM certifications Project management skills in development projects - Working knowledge of JIRA/Confluence Experience with applying emerging technologies in solution development Project planning and execution, stakeholder communication Risk & issue management Preferred Technical & Functional Skills Cloud platforms - AWS/GCP/Azure - MLOps/LLMOps awareness Key behavioral attributes/requirements Excellent communication skills Good management and employee interface People management skills with demonstrated ability to build a transparent and cohesive team Hands-on experience in managing process delivery, with emphasis on people and process management Flexible to work timings and working in different technologies and projects Collaborate with multiple stakeholders, business leaders and delivery leads, and promote effective cross-team collaboration Ensure financial targets of hours and utilization are met. Monitor achievement and implement proactive measures to mitigate deviations Ability to influence strategy and policies
Qualifications This role is for you if you have the below: Educational Qualifications Project Management-related academic qualifications (e.g., MBA, PMP) Exposure to GenAI projects Work Experience Total work experience of 9-14 years Relevant Project Management experience in the Audit and Finance domain of 8+ years Formulated and executed strategic plans to integrate technology and drive business/process improvements Experience leading technology projects from ideation to implementation

Posted 6 days ago


3.0 - 7.0 years

0 Lacs

kochi, kerala

On-site

The responsibilities for this role include conducting original research on generative AI models, focusing on model architectures, training methods, fine-tuning, and evaluation strategies. You will be responsible for building Proof of Concepts (POCs) with emerging AI innovations and assessing their feasibility for production. Additionally, you will design and experiment with multimodal generative models encompassing text, images, audio, and other modalities. Developing autonomous, agent-based AI systems capable of adaptive decision-making is a key aspect of the role. You will lead the design, training, fine-tuning, and deployment of generative AI systems on large datasets. Optimizing AI algorithms for efficiency, scalability, and computational performance using parallelization, distributed systems, and hardware acceleration will also be part of your responsibilities. Managing data preprocessing and feature engineering, including cleaning, normalization, dimensionality reduction, and feature selection, is essential. You will evaluate and validate models using industry-standard benchmarks, iterating to achieve target KPIs. Providing technical leadership and mentorship to junior researchers and engineers is crucial. Documenting research findings, model architectures, and experimental outcomes in technical reports and publications is also required. It is important to stay updated with the latest advancements in NLP, DL, and generative AI to foster a culture of innovation within the team. The mandatory technical and functional skills for this role include strong expertise in PyTorch or TensorFlow. Proficiency in deep learning architectures such as CNN, RNN, LSTM, Transformers, and LLMs (BERT, GPT, etc.) is necessary. Experience in fine-tuning open-source LLMs (Hugging Face, LLaMA 3.1, BLOOM, Mistral AI, etc.) is required. Hands-on knowledge of PEFT techniques (LoRA, QLoRA, etc.) is expected. Familiarity with emerging AI frameworks and protocols (MCP, A2A, ACP, etc.) is a plus. Deployment experience with cloud AI platforms like GCP Vertex AI, Azure AI Foundry, or AWS SageMaker is essential. A proven track record in building POCs for cutting-edge AI use cases is also important. In terms of the desired candidate profile, experience with LangGraph, CrewAI, or Autogen for agent-based AI is preferred. Large-scale deployment of GenAI/ML projects with MLOps/LLMOps best practices is desirable. Experience in handling scalable data pipelines (BigQuery, Synapse, etc.) is a plus. A strong understanding of cloud computing architectures (Azure, AWS, GCP) is beneficial. Key behavioral attributes for this role include a strong ownership mindset, being able to lead end-to-end project deliverables, not just tasks. The ability to align AI solutions with business objectives and data requirements is crucial. Excellent communication and collaboration skills for cross-functional projects are also important.
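
The PEFT techniques mentioned above (LoRA, QLoRA) typically look like the following sketch, which attaches LoRA adapters to a small open-source causal LM with the Hugging Face peft library; the base model, target module name, and hyperparameters are illustrative assumptions rather than requirements from the listing.

```python
# Hedged sketch of LoRA-based parameter-efficient fine-tuning with peft.
# GPT-2 is used only as a small stand-in; the rank, alpha, and dropout values
# are common starting points, not prescriptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "gpt2"  # swap for a LLaMA/Mistral checkpoint you are licensed to use
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection for GPT-2; differs per architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
# Training then proceeds with a standard Trainer/optimizer loop on the adapted model.
```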

Posted 1 week ago


4.0 - 9.0 years

6 - 8 Lacs

chennai

Remote

Roles and Responsibilities Design, test, and refine prompts to optimize LLM performance across diverse tasks and domains. Build and maintain a library of prompt templates and reusable components. Collaborate with engineers, analysts, and business teams to embed prompt engineering into broader AI workflows. Analyse model outputs and user feedback to iteratively improve prompt accuracy, consistency, and relevance. Monitor developments in the fields of prompt engineering, generative AI, and LLM capabilities to inform internal practices. Contribute to documentation, internal guidelines, and knowledge-sharing initiatives related to prompt engineering. Required Background Demonstrable experience with prompt writing and LLM fine-tuning. Skilled in writing effective test cases for validation and performance checks. Basic data analysis capabilities, with proficiency in tools such as SQL and Excel. Strong problem-solving skills with a keen eye for detail. Excellent communication and collaboration skills, with the ability to work across cross-functional teams. Quick learner with a proactive mindset and an interest in emerging AI technologies. Ideal Background Design and deliver LLM-powered features with robust LLMOps. Choose models (OpenAI, Google, Anthropic, open-source), build RAG pipelines with vector stores, implement evaluation harnesses, guardrails, and safety checks. Optimise prompts, latency, cost, and quality; deploy services (vLLM/Ray Serve) and track telemetry. Strong grounding in prompt design and practical GenAI applications. You'll play a key role in optimizing how large language models (LLMs) are used across various workflows, helping drive innovation and performance through effective prompt engineering. Generative AI: advanced expertise working with LLMs/transformer models, AWS Bedrock, SageMaker, LangChain. LLMOps: putting models into production in the AWS ecosystem. Experience productionizing ML/GenAI services and working with complex datasets. Strong understanding of software development, algorithms, optimization, and scaling. Excellent communication and business analysis skills. Cloud engineering experience (AWS cloud services), Snowflake. Software development experience consolidating information from thousands of tables. We want to explore using GenAI (generative natural language AI) to accelerate this process, as the current manual approach is very laborious. The desired outcome is to have a senior-level resource who can deliver a proof of concept within 8-10 weeks, working part-time as well. Prompt engineering & prompt templates RAG architecture (chunking, retrieval, ranking) Evaluation (groundedness, factuality) & guardrails Serving & scaling (vLLM/Ray), latency/cost tuning Data governance, privacy, and security Education: Bachelor's/Master's in CS/AI/Statistics or equivalent.
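
A prompt-template library of the kind this role maintains can start as simply as the sketch below, where named templates are rendered with validated fields; the template names and wording are hypothetical examples, not deliverables from the listing.

```python
# Minimal sketch of a reusable prompt-template library built on string.Template.
# substitute() raises KeyError if a required field is missing, which doubles as
# a lightweight validation step for each prompt.
from string import Template

PROMPT_LIBRARY = {
    "summarize": Template(
        "You are a precise analyst. Summarize the text below in $max_sentences sentences.\n\nText:\n$text"
    ),
    "extract_fields": Template(
        "Extract the fields $fields from the document below and return JSON only.\n\nDocument:\n$text"
    ),
}

def render_prompt(name: str, **kwargs: str) -> str:
    """Render a named template, failing loudly if a required field is missing."""
    return PROMPT_LIBRARY[name].substitute(**kwargs)

if __name__ == "__main__":
    print(render_prompt("summarize", max_sentences="3", text="Quarterly revenue rose 12%..."))
```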

Posted 1 week ago


12.0 - 14.0 years

0 Lacs

hyderabad, telangana, india

Remote

AI Architect - Agentic AI Platform. Role Overview: We are seeking an experienced AI Solution Architect to lead the design and development of an Agentic AI platform for the Mortgage Industry. The ideal candidate will combine deep technical expertise in AI/ML and LLMOps with strong domain knowledge in financial services, mortgage processes, and regulatory compliance. This role requires defining the multi-agent orchestration framework, integration strategy, and ensuring scalability, security, and reliability of the platform. Location: Hyderabad (preferred) / Remote. Experience: 12+ years. Key Responsibilities: Architect the end-to-end Agentic AI ecosystem for mortgage workflows (KYC, credit scoring, risk evaluation, loan origination, compliance checks). Design the multi-agent orchestration layer (task delegation, planning, monitoring, feedback loops). Define LLMOps/MLOps pipelines for continuous training, testing, and deployment of AI agents. Ensure platform scalability, fault tolerance, and secure integration with core banking systems, CRMs, and external APIs (credit bureaus, KYC vendors). Select and optimize AI frameworks (LangChain, LlamaIndex, Haystack, DSPy, Hugging Face). Work with compliance and legal teams to ensure GDPR, CCPA, and financial regulations (FCRA, AML/KYC, Basel III) are adhered to. Mentor technical teams and establish best practices for AI agent design and governance. Required Skills & Experience: At least 5 years in AI/ML solution architecture and proven experience architecting AI-powered enterprise platforms, preferably in financial services/mortgage. Strong knowledge of LLMs, Retrieval-Augmented Generation (RAG), fine-tuning, embeddings, and vector databases. Experience with cloud platforms (Azure AI/ML, AWS SageMaker, GCP Vertex AI) and containerized deployments (Docker, Kubernetes). Hands-on with LangChain, OpenAI, Anthropic, Cohere, or open-source LLMs (LLaMA, Falcon, Mistral). Expertise in MLOps/LLMOps, CI/CD pipelines, observability, and model governance. Familiarity with mortgage lifecycle systems (loan origination systems, underwriting platforms, credit risk engines). Strong communication skills and ability to work cross-functionally.

Posted 1 week ago


5.0 - 7.0 years

0 Lacs

hyderabad, telangana, india

On-site

Inviting applications for the role of Lead Consultant - MLOps Engineer! In this role, you will define, implement and oversee the MLOps strategy for scalable, compliant, and cost-efficient deployment of AI/GenAI models across the enterprise. This role combines deep DevOps knowledge, infrastructure architecture, and AI platform design to guide how teams build and ship ML models securely and reliably. You will establish governance, reuse, and automation frameworks for AI infrastructure, including Terraform-first cloud automation, multi-environment CI/CD, and observability pipelines. Responsibilities Architect secure, reusable, modular IaC frameworks across clouds and regions for MLOps Lead the development of CI/CD pipelines and standardize deployment frameworks. Design observability and monitoring systems for ML/GenAI workloads. Collaborate with platform, data science, compliance and Enterprise Architecture teams to ensure scalable ML operations. Define enterprise-wide MLOps architecture and standards (build, deploy, monitor) Lead the design of the GenAI/LLMOps platform (Bedrock/OpenAI/Hugging Face + RAG stack) Integrate governance controls (approvals, drift detection, rollback strategies) Define model metadata standards, monitoring SLAs, and re-training workflows Influence tooling, hiring, and roadmap decisions for AI/ML delivery Engage in the design, development and maintenance of data pipelines for various AI use cases Actively contribute to key deliverables as part of an agile development team Qualifications we seek in you! Minimum Qualifications Several years of experience in DevOps or MLOps roles. Degree/qualification in Computer Science or a related field, or equivalent work experience Strong Python programming skills. Hands-on experience in containerised deployment. Proficient with AWS (SageMaker, Lambda, ECR), Terraform, and Python. Demonstrated experience deploying multiple GenAI systems into production. Hands-on experience deploying 3-4 ML/GenAI models in AWS. Deep understanding of the ML model lifecycle: train, test, deploy, monitor, retrain. Experience in developing, testing, and deploying data pipelines using public cloud. Clear and effective communication skills to interact with team members, stakeholders and end users Knowledge of governance and compliance policies, standards, and procedures Exposure to RAG/LLM workloads and model deployment infrastructure. Experience in developing, testing, and deploying data pipelines Preferred Qualifications/Skills Experience designing model governance frameworks and CI/CD pipelines. Knowledge of governance and compliance policies, standards, and procedures Advanced understanding of platform security, cost optimization, and ML observability.
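
Observability for deployed ML/GenAI workloads, as called for above, often includes a drift check such as the Population Stability Index; the sketch below compares live feature values against a training baseline, with the bin count and the 0.2 alert threshold as common rules of thumb rather than listing requirements.

```python
# Illustrative drift-monitoring sketch: Population Stability Index (PSI)
# between a training baseline and live traffic for a single numeric feature.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    expected_pct = np.clip(expected / expected.sum(), 1e-6, None)
    actual_pct = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(0.3, 1.0, 10_000)   # shifted distribution simulating drift
score = psi(baseline, live)
print(f"PSI = {score:.3f}")           # > 0.2 is often treated as significant drift
```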

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

kolkata, west bengal

On-site

Process9 is looking for a DevOps/MLOps Specialist with expertise in IaC (Terraform) to join their team. As a B2B and SaaS-based software company, Process9 focuses on developing middleware application platforms for language localization across various digital platforms. The company aims to revolutionize the language technology space and expand its global reach in the near future. The ideal candidate for this role will be responsible for optimizing AI/ML infrastructure, implementing Infrastructure as Code (IaC), and enhancing collaboration among different teams. Key responsibilities include managing Kubernetes clusters, designing CI/CD pipelines for ML model deployment, and automating cloud infrastructure provisioning. The candidate should have 5-8 years of experience in DevOps, MLOps, or LLMOps, along with expertise in Kubernetes, Terraform, Ansible, and cloud platforms such as AWS, GCP, or Azure. Strong scripting skills in Python and Bash for automation are required, as well as hands-on experience with CI/CD, GitOps, and ML model deployment. A qualification in B.Tech (CS & IT)/M.Tech (CS & IT)/MCA is preferred. If you are passionate about optimizing infrastructure, streamlining deployments, and collaborating with cross-functional teams in a dynamic environment, this role at Process9 could be the perfect fit for you. Apply now to be part of a leading language technology company that is reshaping the digital landscape with innovative solutions.

Posted 1 week ago


10.0 - 20.0 years

65 - 85 Lacs

bengaluru

Hybrid

Oracle Cerner in Bangalore is seeking a Software Developer 5 with over 10 years of experience. The role requires a skilled hands-on architect proficient in Python, with expertise in Java, Agentic AI, LangGraph, Vector DBs, OpenSearch, and LLM-based systems. The ideal candidate excels in system design, mentors senior engineers, directs technical aspects, and ensures the delivery of scalable, secure, and high-performing AI services. Ready to shape the future of healthcare AI? Apply now and be a part of building the clinical agentic infrastructure for tomorrow. The OHAI Agent Engineering Team is pioneering the next generation of intelligent agent frameworks, delivering transformative solutions to elevate healthcare services and drive exceptional customer experiences. Our cutting-edge platform seamlessly integrates advanced automation, clinical intelligence, and user-centric design, empowering healthcare providers to deliver proactive, personalized care and setting a new standard for operational excellence and patient delight. Join us as we shape the future of healthcare with innovation, passion, and impact. We are seeking a Consultant Member of Technical Staff with deep expertise in healthcare technologies, Agentic AI frameworks, and modern data/AI infrastructure to join our Clinical AI team. This role is pivotal in shaping our next-generation Clinical Agentic AI Platform, enabling dynamic, context-aware care pathways for healthcare providers and patients. Key Responsibilities: Architect and lead the development of clinical agent-based AI systems using LangGraph or similar frameworks. Collaborate with product and clinical informatics teams to design AI-driven care pathways and decision support systems. Define and evolve the technical roadmap, aligning with compliance, performance, and integration standards. Lead the implementation of LangGraph-based agents, integrating them with Vector Databases, OpenSearch, and other retrieval-augmented generation (RAG) pipelines. Design and guide data pipelines for LLM-based applications, ensuring real-time contextual understanding. Mentor and coach senior and mid-level engineers; foster a culture of technical excellence and innovation. Conduct design reviews, enforce code quality standards, and ensure scalable, maintainable solutions. Collaborate with platform teams on observability, deployment automation, and runtime performance optimization. Stay current with evolving AI trends, particularly in the LLMOps / Agentic AI space, and evaluate emerging tools and frameworks.
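
Framework aside, the agent systems described above reduce to a plan-act-observe loop like the framework-agnostic sketch below; the stubbed planner and the demo tools stand in for an LLM and real clinical integrations, and are purely illustrative.

```python
# Framework-agnostic sketch of the loop that agent frameworks such as LangGraph
# orchestrate: a planner picks a tool, the runtime executes it, and the
# observation is fed back until the goal is met or the step budget runs out.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_patient": lambda q: f"Record found for '{q}' (demo data).",
    "order_lab_test": lambda q: f"Lab test '{q}' queued (demo action).",
}

def plan_next_step(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM planner: returns (tool_name, tool_input) or ('finish', answer)."""
    if not history:
        return "lookup_patient", goal
    return "finish", f"Completed after {len(history)} step(s): {history[-1]}"

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = plan_next_step(goal, history)
        if tool == "finish":
            return arg
        observation = TOOLS[tool](arg)
        history.append(observation)
    return "Stopped: step budget exhausted."

print(run_agent("Jane Doe"))
```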

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

Building off Oracle's Cloud momentum, a new organization, Oracle Health & AI, has been created. The team is dedicated to product development and strategy for Oracle Health, focusing on modernized, automated healthcare. This innovative venture embodies an entrepreneurial spirit, fostering an energetic and creative environment. Your contribution is vital in establishing this organization as a world-class engineering center with a strong emphasis on excellence. Within the overarching Oracle Health & AI organization, the Oracle Digital Assistant (ODA) serves as an Assistant platform enabling developers to craft skills and digital assistants using Conversational AI and Generative AI capabilities in a low-code paradigm. The ODA team has been driving conversational AI experiences for both internal Oracle teams and external customers for the past 7 years. The team is evolving rapidly to offer Generative AI solutions for healthcare and enterprise sectors, operating in an agile environment with significant visibility and support from senior leadership. We are seeking ML scientists, Data Engineers, Software Development Engineers, Product/Program Managers, and ML engineers with a robust machine learning background to develop cutting-edge solutions. Your role involves working with extensive datasets to address real-world challenges in the Healthcare domain, which will be integrated into our Health AI products. These newly formed teams based in the India Center will be engaged in New Feature Development, Software Development, ML Engineering, MLOps, LLMOps, Applied ML Sciences, and ML Research, with a primary focus on GenAI capabilities. Your involvement will be crucial in delivering innovative Generative AI-powered solutions for healthcare and enterprise clients. Qualifications: - Bachelor's Degree with relevant experience; Master's degree - Experience in real-world MLOps deploying models into production and enhancing product features - Proficiency in working within a cloud environment - 3+ Years of Experience - Strong understanding of LLMs, GenAI, Prompt Engineering, and Copilot Career Level - IC3 Responsibilities: - Proficient in Python programming and a Deep Learning framework (PyTorch, MXNet, TensorFlow, Keras) with knowledge of LLMs, LLMOps, and Generative AI to deliver as per specifications independently - Familiarity with Classification, Prediction, Recommender Systems, Time Series Forecasting, Anomaly Detection, Optimization, Graph ML, NLP, and hands-on experience in developing ML models using these techniques - Ability to utilize existing libraries and design algorithms from scratch, create data pipelines, and feature engineering pipelines for robust model building - Leveraging ML Sciences depth, programming skills, and mathematical understanding to provide advanced ML solutions for problem-solving in the Healthcare domain - Lead the selection of methods and techniques to drive solutions - Exhibit strong program management skills to multitask efficiently - Mentor and lead junior data scientists effectively - Contribute to peer reviews, team learning, achieve product objectives, and establish best practices for the organization - Extensive experience in LLM Systems, LLMOps, familiarity with Prompt Engineering, and RAG systems

Posted 2 weeks ago


6.0 - 10.0 years

0 - 0 Lacs

jaipur, rajasthan

On-site

As an experienced and innovative AI Team Lead at Vaibhav Global Ltd (VGL), your role will involve leading a team of AI/ML engineers and data scientists in deploying and managing cutting-edge AI/ML models at scale. You will be responsible for overseeing the development, training, and deployment of machine learning models using Azure ML, implementing MLOps pipelines for continuous integration and monitoring of AI solutions, and leading initiatives in LLMOps for deploying and fine-tuning large language models (LLMs) at scale. Your key responsibilities will include setting clear goals and defining KPIs for your team, fostering a culture of innovation and collaboration, designing robust AI architectures that integrate with cloud services, ensuring operational excellence in AI lifecycle management, and collaborating with cross-functional teams to align AI solutions with organizational goals. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or related fields, along with 6+ years of experience in AI/ML, including at least 2 years in a leadership role. You should have proven expertise in deploying models using Azure ML, hands-on experience with MLOps frameworks and LLMOps tools, strong programming skills in languages like Python or R, proficiency in cloud platforms, and knowledge of CI/CD tools and containerization. This position offers a competitive compensation package ranging from 20,00,000 to 25,00,000 yearly, along with perks such as great responsibility, work-life balance, and a culture of openness and flexibility that encourages professional growth. You will report directly to the IT - Delivery Head of VGL and work onsite in Jaipur, Rajasthan. If you are a dynamic leader with a passion for AI and a track record of delivering impactful solutions, we invite you to join our team and contribute to our mission of delivering joy and becoming the value leader in electronic retailing of jewelry and lifestyle products at Vaibhav Global Ltd (VGL).

Posted 2 weeks ago

Apply

5.0 - 7.0 years

4 - 8 Lacs

pune, maharashtra, india

On-site

This is a highly specialized, hands-on role focused on building and deploying generative AI solutions. It requires someone who can not only work with AI models but also integrate them seamlessly and securely into existing enterprise software.

- Generative AI & LLM Integration: The core responsibility is integrating Gemini models into corporate platforms like Slack and Confluence. This involves hands-on development, prompt engineering, and the deployment of large language models (LLMs) in a production environment.
- AI Orchestration & MLOps: A key part of the job is building the infrastructure that makes the AI work. This includes managing orchestration logic, setting up embedding pipelines, and ensuring all components, from the prompt to the data retrieval, work together smoothly.
- Vector Databases & Data Engineering: You must be proficient with vector databases (like Pinecone or Weaviate) and understand the process of creating embeddings from structured and unstructured data. This is crucial for enabling the AI to retrieve relevant information from a company's internal documentation (see the retrieval sketch after this posting).
- API & System Integration: The role requires strong technical skills to connect various platforms. You'll need to set up API authentication and role-based access controls so the AI assistants can securely access data from systems like Looker and Confluence.
- Agile Development: You will be working in a sprint-based Agile environment, so familiarity with daily standups, sprint demos, and user acceptance testing is essential for managing projects and meeting deadlines.
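For illustration, a minimal sketch of the embedding-and-retrieval step described above, using sentence-transformers with FAISS standing in for a managed vector database such as Pinecone or Weaviate; the documents and query are placeholders.

```python
# Minimal sketch: embed internal documents and retrieve the most relevant one
# for a query. FAISS stands in for a managed vector DB; documents are placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "How to request access to the Looker dashboards.",
    "Confluence guide: publishing a runbook for on-call engineers.",
    "Slack workflow for submitting IT support tickets.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors == cosine similarity
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

query_vec = model.encode(["where do I find Looker access instructions?"],
                         normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 1)
print(docs[ids[0][0]], float(scores[0][0]))
```

In production this usually runs as a batch ingestion job that chunks documents before embedding and writes the vectors to the managed store, with the search call sitting behind an authenticated API.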

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

chandigarh

On-site

As an AI Engineer, your primary responsibility will be to architect, develop, and deploy advanced AI solutions that incorporate Machine Learning, Generative AI, NLP, and LLMs. It is crucial to stay updated on the latest AI advancements, actively researching and integrating emerging trends and technologies such as LLMOps, large-model deployment, LLM security, and vector databases. Your role will also involve streamlining data modeling processes to automate tasks, enhance data preparation, and facilitate data exploration for optimizing business outcomes.

Collaboration with cross-functional teams, including business units, accounts teams, researchers, and engineers, will be essential to translate business requirements into actionable AI solutions. You must exhibit expertise in responsible AI practices, ensuring fairness, transparency, and interpretability in all models. Identifying and mitigating potential risks related to AI and LLM development and deployment, with an emphasis on data trust and security, will also be part of your responsibilities.

You will also contribute to the professional development of the AI team by mentoring engineers, fostering knowledge sharing, and promoting a culture of continuous learning. This role is based in a lab environment and involves hands-on, fast-paced, and high-intensity work. The ideal candidate is proactive, adaptable, and comfortable working in a dynamic and demanding setting.

To qualify for this position, you should have a minimum of 2 years of hands-on experience developing and deploying AI solutions, with a proven track record of success, and a Master's degree in Computer Science, Artificial Intelligence, or a related field (or equivalent experience). Proficiency in Machine Learning, NLP, Generative AI, and LLMs, including their architectures, algorithms, and training methodologies, is essential, along with an understanding of LLMOps principles, prompt engineering, in-context learning, LangChain, and reinforcement learning. Familiarity with best practices for large-model deployment, monitoring, management, and scalability is crucial. Experience with Azure Cloud services, strong communication, collaboration, and problem-solving abilities, and a commitment to ethical AI practices and security standards are also necessary.

You should be proficient with deep learning frameworks and tools such as the Azure ML platform, Python, and PyTorch, and have hands-on experience with ML frameworks, libraries, and third-party ML models. Expertise in building solutions with open-source AI/ML/DL tools and libraries, strong analytical and problem-solving skills, and the ability to write clear, optimized code and address complex technical challenges are important qualities for this role. You should be self-motivated and a fast learner with a proactive approach to new technologies, proficient in data analysis and troubleshooting, and experienced in building AI/ML/DL solutions for NLP/text applications; familiarity with reinforcement learning is advantageous. A minimum of 2 years of experience on AI/ML/DL projects is required, a specialization or certification in Artificial Intelligence is a plus, and good knowledge of Azure AI/Cognitive Services tools is expected.
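For illustration, a minimal sketch of a structured-output prompt call of the kind this role involves, using the OpenAI Python client (v1+); the same pattern applies via the Azure OpenAI client. The model name, extraction schema, and input text are hypothetical, and an API key is assumed to be set in the environment.

```python
# Minimal sketch: prompt engineering for structured (JSON) output.
# Model name and schema are illustrative; OPENAI_API_KEY is assumed to be set.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You extract entities from short clinical-style text. "
    'Reply with JSON only: {"symptoms": [...], "medications": [...]}'
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Patient reports headaches and is taking ibuprofen."},
    ],
)

# In production, validate or repair the output before parsing it.
print(json.loads(resp.choices[0].message.content))
```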

Posted 2 weeks ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

hyderabad, bengaluru, delhi / ncr

Work from Office

About the Role:
We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions. You will also be expected to apply modern LLMOps practices, handle schema-constrained generation, optimize cost and latency trade-offs, mitigate hallucinations, and ensure robust safety, personalization, and observability across GenAI systems.

Responsibilities:
- Generative AI Pipeline Development: Design and implement scalable and modular pipelines for data ingestion, transformation, and orchestration across GenAI workloads. Manage data and model flow across LLMs, embedding services, vector stores, SQL sources, and APIs. Build CI/CD pipelines with integrated prompt regression testing and version control. Use orchestration frameworks like LangChain or LangGraph for tool routing and multi-hop workflows. Monitor system performance using tools like Langfuse or Prometheus.
- Data and Document Ingestion: Develop systems to ingest unstructured (PDF, OCR) and structured (SQL, APIs) data. Apply preprocessing pipelines for text, images, and code. Ensure data integrity, format consistency, and security across sources.
- AI Service Integration: Integrate external and internal LLM APIs (OpenAI, Claude, Mistral, Qwen, etc.). Build internal APIs for smooth backend-AI communication. Optimize performance through fallback routing to classical or smaller models based on latency or cost budgets (see the routing sketch after this posting). Use schema-constrained prompting and output filters to suppress hallucinations and maintain factual accuracy.
- Retrieval-Augmented Generation (RAG) Pipelines: Build hybrid RAG pipelines using vector similarity (FAISS/Qdrant) and structured data (SQL/API). Design custom retrieval strategies for multi-modal or multi-source documents. Apply post-retrieval ranking using DPO or feedback-based techniques. Improve contextual relevance through re-ranking, chunk merging, and scoring logic.
- LLM Integration and Optimization: Manage prompt engineering, model interaction, and tuning workflows. Implement LLMOps best practices: prompt versioning, output validation, caching (KV store), and fallback design. Optimize generation using temperature tuning, token limits, and speculative decoding. Integrate observability and cost monitoring into LLM workflows.
- Backend Services Ownership: Design and maintain scalable backend services supporting GenAI applications. Implement monitoring, logging, and performance tracing. Build role-based access control (RBAC) and multi-tenant personalization. Support containerization (Docker, Kubernetes) and autoscaling infrastructure for production.

Required Skills and Qualifications:
- Education: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Experience: 5+ years in AI/ML engineering with end-to-end pipeline development; hands-on experience building and deploying LLM/RAG systems in production; strong experience with public cloud platforms (AWS, Azure, or GCP).
- Technical Skills: Proficient in Python and libraries such as Transformers, SentenceTransformers, and PyTorch. Deep understanding of GenAI infrastructure, LLM APIs, and toolchains like LangChain/LangGraph. Experience with RESTful API development and version control using Git. Knowledge of vector DBs (Qdrant, FAISS, Weaviate) and similarity-based retrieval. Familiarity with Docker, Kubernetes, and scalable microservice design. Experience with observability tools like Prometheus, Grafana, or Langfuse.
- Generative AI Specific Skills: Knowledge of LLMs, VAEs, diffusion models, and GANs. Experience building structured + unstructured RAG pipelines. Prompt engineering with safety controls, schema enforcement, and hallucination mitigation. Experience with prompt testing, caching strategies, output filtering, and fallback logic. Familiarity with DPO, RLHF, or other feedback-based fine-tuning methods.
- Soft Skills: Strong analytical, problem-solving, and debugging skills. Excellent collaboration with cross-functional teams: product, QA, and DevOps. Ability to work in fast-paced, agile environments and deliver production-grade solutions. Clear communication and strong documentation practices.

Preferred Qualifications:
- Experience with OCR, document parsing, and layout-aware chunking.
- Hands-on experience with MLOps and LLMOps tools for Generative AI.
- Contributions to open-source GenAI or AI infrastructure projects.
- Knowledge of GenAI governance, ethical deployment, and usage controls.
- Experience with hallucination-suppression frameworks like Guardrails.ai, Rebuff, or Constitutional AI.

Experience and Shift:
- Shift time: 2:30 PM to 11:30 PM IST
- Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
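For illustration, a minimal sketch of fallback routing between a smaller and a larger model under confidence and latency budgets, as mentioned under AI Service Integration above; call_small_model and call_large_model are hypothetical stand-ins for real LLM clients, and the thresholds are arbitrary.

```python
# Minimal sketch: route to a cheap model first, escalate to a larger model
# only when the cheap answer is low-confidence and the latency budget allows.
# call_small_model / call_large_model are hypothetical stand-ins.
import time

LATENCY_BUDGET_S = 2.0   # arbitrary budget
MIN_CONFIDENCE = 0.7     # arbitrary threshold

def call_small_model(prompt: str) -> tuple[str, float]:
    # Hypothetical: returns (answer, self-reported confidence score)
    return "draft answer", 0.55

def call_large_model(prompt: str) -> str:
    # Hypothetical: slower, more capable model
    return "final answer"

def answer(prompt: str) -> str:
    start = time.monotonic()
    draft, confidence = call_small_model(prompt)
    elapsed = time.monotonic() - start
    # Keep the draft if it is confident enough, or if escalating would blow the budget.
    if confidence >= MIN_CONFIDENCE or elapsed >= LATENCY_BUDGET_S:
        return draft
    return call_large_model(prompt)

print(answer("Summarize the refund policy for enterprise customers."))
```

The same gate can key off token cost rather than latency, which is the usual trade-off when the larger model is an external API.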

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Agoda is an online travel booking platform that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, as well as flights, activities, and more. As part of Booking Holdings and based in Asia, Agoda boasts a diverse team of 7,100+ employees from 95+ nationalities across 27 markets. The work environment at Agoda is characterized by diversity, creativity, and collaboration, fostering innovation through a culture of experimentation and ownership, ultimately enhancing the customer experience of exploring the world.

At Agoda, our purpose is to bridge the world through travel. We believe travel enables people to enjoy, learn, and experience the wonders of the world, bringing individuals and cultures closer together while fostering empathy, understanding, and happiness. Our team comprises skilled, driven individuals from around the globe united by a passion to make a positive impact. Leveraging innovative technologies and strong partnerships, we strive to make travel accessible and rewarding for everyone.

The LLMOps Platform at Agoda is dedicated to enabling scalable, secure, and efficient deployment of Large Language Models (LLMs) and generative AI solutions. Our platform, built on robust cloud and on-premises infrastructure, empowers teams to experiment, deploy, and monitor LLM-powered applications with confidence and agility. The LLMOps team bridges the gap between advanced AI research and real-world production systems, ensuring reliable, responsible, and scalable delivery of LLMs. Emphasizing best practices in model lifecycle management, data privacy, prompt engineering, and continuous improvement, we enable users to unlock the full potential of generative AI.

In this role, you will lead the design, development, and implementation of LLMOps solutions, collaborating with data scientists, ML engineers, and product teams to build scalable pipelines for LLM fine-tuning, evaluation, and inference. You will develop and maintain tools for prompt management, versioning, and automated evaluation of LLM outputs, ensuring responsible AI practices by integrating data anonymization, bias detection, and compliance monitoring into LLM workflows. Additionally, you will work on monitoring and management tools to ensure the reliability and performance of on-premises and cloud machine learning platforms, while engaging with stakeholders across the organization to understand generative AI needs and deliver impactful LLMOps solutions.

To succeed in this role, you should have at least 5 years of experience in LLMOps, MLOps, Software Engineering, or a related field, along with strong programming skills in a modern language such as Python, Kotlin, Scala, or Java. Excellent communication and collaboration skills are essential, as well as a commitment to code quality, simplicity, and performance. Experience with LLMOps platforms and tools, familiarity with prompt engineering and LLM evaluation frameworks, knowledge of data privacy practices, and expertise in containerization, orchestration, DevOps, CI/CD, and scalable API development are highly beneficial. A passion for engineering challenges in generative AI and experience in designing and building LLMOps infrastructure will further contribute to your success in this role.

Agoda offers a flexible hybrid work arrangement with remote work options, generous annual leave, exclusive accommodation discounts, wellness benefits, opportunities for career growth, competitive compensation, and comprehensive health benefits. Additionally, a relocation package is provided for employees and their families, including full visa sponsorship, airfare support, temporary accommodation, and assistance with household goods and pet relocation.

Agoda is an Equal Opportunity Employer, and we will keep your application on file for future vacancies. You can request to have your details removed from the file at any time. For more information, please refer to our privacy policy.
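For illustration, a minimal sketch of versioned prompts with a tiny regression check, in the spirit of the prompt management and automated evaluation tooling described above; the prompts, test case, and run_llm helper are hypothetical.

```python
# Minimal sketch: versioned prompt templates plus a trivial regression check.
# PROMPTS, TEST_CASES, and run_llm are hypothetical stand-ins.
PROMPTS = {
    "summarize-v1": "Summarize the following review in one sentence:\n{review}",
    "summarize-v2": "Summarize the review below in one neutral sentence, no emojis:\n{review}",
}

TEST_CASES = [
    {"review": "Great pool, noisy rooms.", "must_contain": ["pool"]},
]

def run_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return "Guests liked the pool but found the rooms noisy."

def regression_check(version: str) -> bool:
    template = PROMPTS[version]
    for case in TEST_CASES:
        output = run_llm(template.format(review=case["review"])).lower()
        if not all(term in output for term in case["must_contain"]):
            return False
    return True

for version in PROMPTS:
    print(version, "passed" if regression_check(version) else "failed")
```

Real harnesses typically run such checks in CI on every prompt change, alongside cost and latency tracking for each prompt version.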

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

We are seeking a versatile and highly motivated Full Stack Developer to join our fast-paced Generative AI team. As a Full Stack Developer, you will design, develop, and maintain intelligent, scalable web applications that integrate with cutting-edge AI models and ecosystems such as OpenAI, Hugging Face, and Azure OpenAI. You will collaborate with cross-functional teams, including AI researchers, backend engineers, and UI/UX designers, to bring AI-powered features to production, prioritizing performance, reliability, and innovation.

Your key responsibilities will include:
- Designing, developing, and maintaining full-stack applications with Generative AI capabilities
- Building robust APIs and microservices to support AI workflows (see the API sketch after this posting)
- Developing responsive and accessible front-end applications using React.js, Next.js, or Angular
- Collaborating with the AI/ML team to embed LLMs, vector databases, and custom agents into scalable systems
- Optimizing performance and security for production-grade AI integrations
- Contributing to prompt engineering, agent orchestration logic, and human-in-the-loop user experience design
- Writing comprehensive tests to ensure code quality and stability
- Participating in sprint planning, code reviews, and architectural design discussions
- Documenting architecture, flows, and model integrations for cross-team clarity

Required Skills & Qualifications:
- 3+ years of Full Stack Developer experience with Node.js, Python, or .NET
- Proficiency in JavaScript/TypeScript and modern front-end frameworks
- Deep understanding of RESTful API and GraphQL design principles
- Working knowledge of vector databases such as Pinecone, Weaviate, and FAISS
- Experience with cloud platforms and containerization
- Comfortable with SQL and NoSQL databases
- Familiarity with AI/ML tools and libraries such as OpenAI API, LangChain, Hugging Face, and Azure OpenAI
- Strong communication, analytical thinking, and team collaboration skills

Preferred Skills:
- Experience with Azure AI tools and services
- Exposure to LLMOps and AI lifecycle tools
- Familiarity with agent-based AI frameworks
- Understanding of authentication flows and observability tooling
- Practical experience with CI/CD pipelines
- Experience building internal tooling or AI agent dashboards
- Understanding of enterprise compliance and security standards in AI integration

Bonus Points For:
- Contributions to open-source AI projects

Join us to work on the cutting edge of AI + software engineering, collaborate with a world-class team, enjoy flexible working hours tailored for US time-zone collaboration, and drive innovation at the intersection of LLMs, UX, and backend architecture. This is a full-time position with a Monday-to-Friday schedule and an in-person work location.
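For illustration, a minimal sketch of a backend API route that proxies a prompt to an LLM provider, the kind of glue layer this role builds, using FastAPI; the generate() helper is a hypothetical stand-in for an OpenAI, Azure OpenAI, or Hugging Face client call.

```python
# Minimal sketch: a FastAPI route that forwards a question to an LLM backend.
# generate() is a hypothetical stand-in for a real provider client.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    question: str

def generate(prompt: str) -> str:
    # Hypothetical: swap in a real OpenAI / Azure OpenAI / Hugging Face call here.
    return f"(model answer to: {prompt})"

@app.post("/api/ask")
def ask(req: AskRequest) -> dict:
    # In production: add authentication, rate limiting, and observability around this call.
    return {"answer": generate(req.question)}
```

Run it with `uvicorn app:app` and POST {"question": "..."} to /api/ask; the request model gives you input validation for free via Pydantic.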

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies