3.0 - 8.0 years
4 - 8 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
We are looking for a skilled AI Engineer with 3 to 8 years of experience in software engineering or machine learning to design, implement, and productionize LLM-powered agents that solve real-world enterprise problems. This position is based in Kolkata.

Roles and Responsibilities
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL databases, vector stores such as Pinecone and Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize with Docker, automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.

Job Requirements
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework and platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
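For candidates new to the retrieval-augmented generation (RAG) pattern this role centers on, the core retrieval step can be sketched in a few lines. This is a toy illustration only — the document names, hard-coded embeddings, and `retrieve` helper are invented for demonstration; a real system would use an embedding model and a vector store such as Pinecone or FAISS.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": document name mapped to a pre-computed embedding.
docs = {
    "invoice policy": [0.9, 0.1, 0.0],
    "vacation policy": [0.1, 0.9, 0.2],
    "security policy": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k document names whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, docs[d]), reverse=True)
    return ranked[:k]

# A query embedding near "invoice policy" retrieves that document first;
# in a full RAG pipeline, its text would then be injected into the LLM prompt.
print(retrieve([0.8, 0.2, 0.1]))
```

The frameworks listed above (LangChain, LlamaIndex, etc.) wrap this same retrieve-then-generate loop with production concerns such as chunking, reranking, and tool use.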
Posted 1 week ago
8 - 12 years
25 - 30 Lacs
Bengaluru
Work from Office
About the Role
As part of our tech transformation and go-external approach in the AI and ML space within the GSS (Global Scaled Solutions) org, we are looking for a Senior Program Manager - Tech who will help scale our existing AI evaluations, training, and benchmarking to the next level by establishing best practices, mentoring and coaching, driving tech transformation, defining quality, and collaborating with other teams to deliver a world-class experience for stakeholders.

What You Will Do
- Own the AI Engineering domain, specifically evaluations and fine-tuning for coding.
- Manage multiple streams and clients related to prompt engineering and evaluations.
- Disambiguate and scope the work; establish success, quality, and evaluation criteria for the team.
- Templatize the work so it can be replicated across streams and clients.
- Influence and motivate the team to think outside the box.
- Abstract and build tooling and efficiencies to improve the process.
- Apply strong program management skills: stakeholder management, risk management, team building, program tracking, and reporting.
- Apply strong problem-solving skills across process, governance, and tooling.
- Coach, mentor, support, and groom the POCs managing multiple streams of work.
- Bring maturity in process, people management, tech domain aptitude, problem solving, and delivery.

What You Will Need
- 8+ years of overall experience, including at least 3-4 years of software engineering experience (frontend/backend) and 3+ years of program management experience. People management experience is a bonus.

Preferred Qualifications
- Experience in hyperscale and/or startup-like environments.
- Tech and problem-solving skills: able to judge code quality (you don't have to write code, but you should be able to read and assess it).
- Templates and tooling for process and efficiency.
- Hands-on software engineering experience.
- Exposure to AI Engineering or ML.
Posted 2 months ago
6 - 10 years
8 - 12 Lacs
Bengaluru
Work from Office
Position Overview:
- Lead Development of Generative AI Applications: Architect and develop advanced generative AI solutions that support business objectives, ensuring high performance and scalability.
- Performance Tuning & Optimization: Identify bottlenecks in applications and implement strategies to improve performance. Optimize machine learning models for efficiency in production environments.
- Collaborate Cross-Functionally: Work closely with data scientists, product managers, and other stakeholders to gather requirements and transform them into robust technical solutions.
- Mentor Junior Engineers: Provide guidance and mentorship to team members on best practices in coding standards, architectural design, and machine learning techniques.
- Research & Innovation: Stay abreast of the latest advancements in artificial intelligence. Propose new ideas that could lead to innovations within the organization.
- Deployment & Scaling Strategies: Lead the deployment of applications on cloud platforms while ensuring they scale to handle increasing loads without compromising performance.
- Documentation & Quality Assurance: Develop comprehensive documentation for projects undertaken. Implement rigorous testing methodologies to ensure high-quality deliverables.

About You:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
- Over 6 years of professional experience working with Python in an enterprise setting.
- Strong expertise in machine learning frameworks such as TensorFlow, PyTorch, or similar libraries; experience with Generative Adversarial Networks (GANs) is highly desirable.
- Proven track record of optimizing machine learning models for performance across various platforms, including cloud services (AWS, Google Cloud Platform).
- Deep understanding of system architecture principles related to scalability and robust application design.
- Excellent problem-solving skills combined with strong analytical abilities.
- Proven leadership skills with experience mentoring junior engineers effectively.
Posted 2 months ago
7 - 12 years
9 - 14 Lacs
Chennai
Work from Office
NLP Experience:
- 5 to 8 years of solid working experience
- Experience with text classification and NER models: fine-tuning and zero-shot
- Experience with Hugging Face, PyTorch
- Experience with document processing

Python / AI Engineering:
- Proficient in Python: build reusable and scalable data pipelines
- NoSQL databases
- Data analysis and ML/DL frameworks
- Web API frameworks
- Unit testing frameworks

Microservices / Cloud:
- Docker, ECS/EKS
- Good experience with at least one cloud service provider (AWS preferable)

General Requirements:
- Ability to handle large-scale unstructured data
- Flexibility to learn new frameworks and keep up with the latest NLP developments
Posted 3 months ago
10 - 14 years
12 - 16 Lacs
Bengaluru
Work from Office
About the Job:
The Data Development Insights & Strategy (DDIS) team is seeking a Principal AI Engineer to lead the design, development, and optimization of AI model lifecycle frameworks within Red Hat's OpenShift AI and RHEL AI infrastructures. As a Principal AI Engineer, you will play a key leadership role in overseeing the strategic direction of AI model deployment and lifecycle management, collaborating across teams to ensure seamless integration, scalability, and performance of mission-critical AI models.

In this role, you will drive the development of innovative solutions for the AI model lifecycle, applying your deep expertise in MLOps/LLMOps, cloud computing, and distributed systems. You will be a technical leader who mentors and guides teams in collaboration with Products & Global Engineering (P&GE) and IT AI Infra to ensure efficient model deployment and maintenance in secure, scalable environments. This is an exciting opportunity for someone who wants to take a leadership role in shaping the strategic direction of Red Hat's AI innovations and driving the optimization of AI models and technologies.

What you will do
- Lead the design and development of scalable, efficient, and secure AI model lifecycle frameworks within Red Hat's OpenShift and RHEL AI infrastructures, ensuring models are deployed and maintained with minimal disruption and optimal performance.
- Define and implement the strategy for optimizing AI model deployment, scaling, and integration across hybrid cloud environments (AWS, GCP, Azure), working with cross-functional teams to ensure consistent high availability and operational excellence.
- Spearhead the creation and optimization of CI/CD pipelines and automation for AI model deployments, leveraging tools such as Git, Jenkins, and Terraform, ensuring zero disruption during updates and integration.
- Champion the use of advanced monitoring tools (e.g., OpenLLMetry, Splunk, Catchpoint) to monitor and optimize model performance, responding to issues and leading the troubleshooting of complex problems related to AI and LLM models.
- Lead cross-functional collaboration with Products & Global Engineering (P&GE) and IT AI Infra teams to ensure seamless integration of new models or model updates into production systems, adhering to best practices and minimizing downtime.
- Define and oversee a structured process for handling feature requests (RFEs), prioritization, and resolution, ensuring transparency and timely delivery of updates and enhancements.
- Lead the adoption of new AI technologies, tools, and frameworks so that Red Hat remains at the forefront of AI and machine learning advancements.
- Drive performance improvements, model updates, and releases on a quarterly basis, ensuring RFEs are processed and resolved within agreed-upon timeframes and driving business adoption.
- Oversee the fine-tuning and enhancement of large-scale models, including foundational models like Mistral and Llama, ensuring optimal allocation of computational resources (GPU management, cost management strategies).
- Lead a team of engineers, mentoring junior and senior talent, fostering an environment of collaboration and continuous learning, and driving the technical growth of the team.
- Contribute to strategic discussions with leadership, influencing the direction of AI initiatives and ensuring alignment with broader business goals and technological advancements.

What you will bring
- A bachelor's or master's degree in Computer Science, Data Science, Machine Learning, or a related technical field. Hands-on experience and demonstrated leadership in AI engineering and MLOps will be considered in lieu of formal degree requirements.
- 10+ years of experience in AI or MLOps, with at least 3 years in a technical leadership role managing the deployment, optimization, and lifecycle of large-scale AI models.
- Deep expertise in cloud platforms (AWS, GCP, Azure) and containerized environments (OpenShift, Kubernetes), with a proven track record of scaling and managing AI infrastructure in production.
- Experience optimizing large-scale distributed AI systems, automating deployment pipelines with CI/CD tools such as Git, Jenkins, and Terraform, and leading performance monitoring with tools such as OpenLLMetry, Splunk, or Catchpoint.
- A strong background in GPU-based computing and resource optimization (e.g., CUDA, MIG, vLLM) and comfort with high-performance computing environments.
- Strong leadership skills: you will mentor and guide engineers while fostering a collaborative, high-performance culture.
- A demonstrated ability to drive innovation, solve complex technical challenges, and work cross-functionally to deliver AI model updates that align with evolving business needs.
- A solid understanding of Agile development processes and excellent communication skills.
- A passion for AI, continuous learning, and staying ahead of industry trends.

Desired skills:
- 10+ years of experience in AI, MLOps, or related fields, with a substantial portion spent in technical leadership roles driving the strategic direction of AI infrastructure and model lifecycle management.
- Extensive experience with foundational models such as Mistral, Llama, and GPT, including their deployment, tuning, and scaling in production environments.
- Proven ability to influence and drive AI and MLOps roadmaps, shaping technical strategy and execution in collaboration with senior leadership.
- In-depth experience with performance monitoring, resource optimization, and troubleshooting of AI models in complex distributed environments.
- Strong background in high-performance distributed systems and container orchestration, particularly for AI/ML workloads.
- Proven experience guiding and mentoring engineering teams, fostering a culture of continuous improvement and technical innovation.

As a Principal AI Engineer at Red Hat, you will have the opportunity to drive major strategic AI initiatives, influence the future of AI infrastructure, and lead a high-performing engineering team. This is a unique opportunity for a seasoned AI professional to shape the future of AI model lifecycle management at scale. If you're ready to take on a technical leadership role with a high level of responsibility and impact, we encourage you to apply.
Posted 3 months ago
10 - 15 years
12 - 17 Lacs
Bengaluru
Work from Office
About the Job:
The Data Development Insights & Strategy (DDIS) team at Red Hat is seeking an AI Engineering Manager to lead a talented team of AI Engineers focused on the design, deployment, and optimization of AI model lifecycle frameworks within our OpenShift AI and RHEL AI infrastructures. As an AI Engineering Manager, you will be responsible for driving the technical vision and execution of AI model lifecycle management at scale, overseeing the development and deployment of cutting-edge AI technologies while ensuring the scalability, performance, and security of mission-critical AI models.

In this leadership role, you will work closely with cross-functional teams, including Products & Global Engineering (P&GE) and IT AI Infra, to drive the deployment, maintenance, and optimization of AI models and infrastructure, ensuring alignment with business objectives and strategic goals. You will manage and mentor a high-performing team of AI Engineers, driving innovation, setting technical priorities, and fostering a collaborative, growth-oriented team culture. This is an ideal role for someone with a strong background in AI/ML, MLOps, and leadership who wants to have a significant impact on Red Hat's AI strategy and innovations.

What you will do
- Lead and manage a team of AI Engineers, providing mentorship and guidance and fostering a culture of continuous learning, collaboration, and technical excellence.
- Define and execute the technical strategy for AI model lifecycle management, ensuring the scalability, security, and optimization of AI models within Red Hat's OpenShift and RHEL AI infrastructures.
- Oversee the development, deployment, and maintenance of AI models, working with engineering teams to ensure seamless integration, minimal downtime, and high availability in production environments.
- Drive the implementation of automation, CI/CD pipelines, and Infrastructure as Code (IaC) practices to streamline AI model deployment, updates, and monitoring.
- Collaborate with cross-functional teams (P&GE, IT AI Infra, etc.) to ensure that AI models and infrastructure meet evolving business needs, data changes, and emerging technology trends.
- Manage and prioritize the resolution of feature requests (RFEs), ensuring timely, transparent communication and effective problem resolution.
- Guide the optimization of large-scale models, including foundational models like Mistral and Llama, and ensure optimal computational resource management (e.g., GPU optimization, cost management strategies).
- Lead efforts to monitor and enhance AI model performance, using advanced tools (OpenLLMetry, Splunk, Catchpoint) to identify and resolve performance bottlenecks.
- Define and track key performance metrics for AI models, ensuring that model updates and releases meet business expectations and deadlines (e.g., quarterly releases, RFEs resolved within 30 days).
- Foster collaboration between teams so that model updates and optimizations align with both business objectives and technological advancements.
- Promote innovation by staying up to date with emerging AI technologies, tools, and industry trends, and by integrating these advancements into Red Hat's AI infrastructure.
- Take ownership of the team's growth and professional development, ensuring engineers are continuously challenged and supported in their career progression.

What you will bring
- A bachelor's or master's degree in Computer Science, Data Science, Machine Learning, or a related technical field; hands-on experience and demonstrated leadership in AI engineering and MLOps can be considered in lieu of formal academic credentials.
- 10+ years of experience in AI engineering, MLOps, or related fields, including at least 3 years of leadership experience; a strong background in managing high-performing engineering teams, mentoring Principal and Senior Engineers, and fostering a culture of technical excellence, continuous improvement, and innovation.
- Expertise in deploying, maintaining, and optimizing AI models at scale across cloud environments such as AWS, GCP, or Azure and containerized platforms like OpenShift or Kubernetes.
- Experience with AI/ML frameworks, performance monitoring, and resource optimization (e.g., CUDA, MIG, vLLM, TGI) to ensure AI models are efficient, scalable, and high-performing.
- Hands-on experience with Infrastructure as Code (IaC) practices, CI/CD tools (Git, Jenkins, Terraform), and automating AI model deployment and monitoring pipelines.
- Strong problem-solving skills for optimizing and troubleshooting large-scale AI systems and distributed architectures.
- Excellent communication skills, with the ability to interact effectively with both technical and non-technical stakeholders.

Desired skills:
- 10+ years of experience in AI, MLOps, or related fields, including 3+ years of leadership experience.
- Experience managing large-scale AI infrastructure, particularly in high-performance computing environments.
- Deep expertise in AI model lifecycle management, from development to deployment, monitoring, and performance optimization.
- A strong background in cross-functional collaboration, driving alignment between business objectives, engineering teams, and technical requirements.
- Proven ability to innovate, set technical direction, and deliver AI infrastructure improvements at scale.

As an AI Engineering Manager at Red Hat, you will have the opportunity to shape the future of AI model lifecycle management at scale, influence strategic initiatives, and drive innovation across a high-performing engineering team. If you're a dynamic leader with a passion for AI and machine learning and want to make a significant impact on Red Hat's AI infrastructure, we encourage you to apply.

About Red Hat
Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Diversity, Equity & Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from diverse backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions of diversity that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email . General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 3 months ago
3 - 6 years
8 - 12 Lacs
Pune
Work from Office
A Generative AI / AI Engineer is a specialized professional responsible for developing, implementing, and optimizing generative AI and AI models and systems. These engineers play a crucial role in harnessing the power of AI to generate creative content, solve complex problems, and build innovative solutions.

Job Description - Grade Specific
The role involves managing AI (including Gen AI) projects and mentoring junior engineers.
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
About the Job:
The Data Development Insights & Strategy (DDIS) team is seeking a Senior AI Engineer to design, scale, and maintain our AI model lifecycle framework within Red Hat's OpenShift AI and RHEL AI infrastructures. As a Senior AI Engineer, you will contribute to managing and optimizing large-scale AI models, collaborating with cross-functional teams to ensure high availability, continuous monitoring, and efficient integration of new model updates, while driving innovation through emerging AI technologies.

In this role, you will leverage your expertise in AI, MLOps/LLMOps, cloud computing, and distributed systems to enhance model performance, scalability, and operational efficiency. You'll work in close collaboration with the Products & Global Engineering (P&GE) and IT AI Infra teams, ensuring seamless model deployment and maintenance in a secure, high-performance environment. This is an exciting opportunity to drive AI model advancements and contribute to the operational success of mission-critical applications.

What you will do
- Develop and maintain the lifecycle framework for AI models within Red Hat's OpenShift and RHEL AI infrastructure, ensuring security, scalability, and efficiency throughout the process.
- Design, implement, and optimize CI/CD pipelines and automation for deploying AI models at scale using tools like Git, Jenkins, and Terraform, ensuring zero disruption during updates and integration.
- Continuously monitor and improve model performance using tools such as OpenLLMetry, Splunk, and Catchpoint, responding to performance degradation and model-related issues.
- Work closely with cross-functional teams, including Products & Global Engineering (P&GE) and IT AI Infra, to seamlessly integrate new models or model updates into production systems with minimal downtime and disruption.
- Enable a structured process for handling feature requests (RFEs), prioritization, and resolution, ensuring transparent communication and timely resolution of model issues.
- Assist in fine-tuning and enhancing large-scale models, including foundational models like Mistral and Llama, while ensuring computational resources are optimally allocated (GPU management, cost management strategies).
- Drive performance improvements, model updates, and releases on a quarterly basis, ensuring all RFEs are processed and resolved within 30 days.
- Collaborate with stakeholders to align AI model updates with evolving business needs, data changes, and emerging technologies.
- Mentor junior engineers, fostering a collaborative and innovative environment.

What you will bring
- A bachelor's or master's degree in Computer Science, Data Science, Machine Learning, or a related technical field. Hands-on experience that demonstrates your ability and interest in AI engineering and MLOps will be considered in lieu of formal degree requirements.
- Programming experience in Python, with a strong understanding of machine learning frameworks and tools.
- Experience working with cloud platforms such as AWS, GCP, or Azure, and familiarity with deploying and maintaining AI models at scale in these environments.
- Experience with large-scale distributed systems and infrastructure, especially production environments where AI and LLM models are deployed and maintained.
- Comfort troubleshooting, optimizing, and automating workflows related to AI model deployment, monitoring, and lifecycle management, with a strong ability to debug, optimize model performance, and automate manual tasks wherever possible.
- Experience managing AI model infrastructure with containerization technologies like Kubernetes and OpenShift, and hands-on experience with performance monitoring tools (e.g., OpenLLMetry, Splunk, Catchpoint).
- A solid understanding of GPU-based computing and resource optimization, with a background in high-performance computing (e.g., CUDA, vLLM, MIG, TGI, TEI).
- Experience working in Agile development environments and collaborating within cross-functional teams to solve complex problems and drive AI model updates.

Desired skills:
- 5+ years of experience in AI or MLOps, with a focus on deploying, maintaining, and optimizing large-scale AI models in production.
- Expertise in deploying and managing models in cloud environments (AWS, GCP, Azure) and on containerized platforms like OpenShift or Kubernetes.
- Familiarity with large-scale distributed systems and experience managing their performance and scalability.
- Experience with performance monitoring and analysis tools such as OpenLLMetry, Prometheus, or Splunk.
- Deep understanding of GPU-based deployment strategies and computational cost management.
- Strong experience managing the model lifecycle, from training to deployment, monitoring, and updates.
- Ability to mentor junior engineers and promote knowledge sharing across teams.
- Excellent communication skills, both verbal and written, with the ability to engage technical and non-technical stakeholders.
- A passion for innovation and continuous learning in the rapidly evolving field of AI and machine learning.
Posted 3 months ago