10.0 - 14.0 years
15 - 20 Lacs
Noida
Work from Office
Position Summary
The Principal AI Architect is responsible for leading the design and implementation of advanced AI solutions and strategic architecture. Working closely with technology leaders across our global client community, you will be their senior Trusted Advisor for their AI-enabled transformation journey. This role demands a deep understanding of AI and related technologies running in Edge, on-prem, and Public Cloud environments. Acting at the forefront of our industry, you will be fully conversant with Generative AI and its impact at both the individual-employee and strategic organisational level. The ideal candidate will be an established thought leader with solid architectural and engineering credentials, working ahead of industry trends, deeply passionate about technology-enabled business transformation, and demonstrating a strong innovation-led posture. As a thought leader, you will interact frequently with CxO-level clients and AI industry leaders, provide expert opinions, and contribute to HCL's strategic vision.

Key Responsibilities

Technical & Engineering Leadership
Design comprehensive AI solution and technology architecture, integrating the latest AI technology developments into world-class solutions. Lead high-level architectural discussions with clients, providing expert guidance on best practices for AI implementations across AI PC, Edge, Data Centre, and Public Cloud environments. Ensure solutions align with modern best practices across the full spectrum of platforms and environments, with a deep understanding across the GPU/NPU, Cognitive Infrastructure, Application, and Copilot/agent domains. Contribute to HCL's thought leadership in the AI and Cloud domains with a deep understanding of open-source technologies (e.g., Kubernetes, OPEA) and partner technologies. Collaborate on joint technical projects with global partners, including Google, Microsoft, AWS, NVIDIA, IBM, Red Hat, Intel, and Dell.
Service Delivery & Innovation
Architect innovative AI solutions from ideation to MVP, rapidly enabling genuine business value. Optimize AI and cloud architectures to meet client requirements, balancing efficiency, accuracy, and effectiveness. Assess and review existing complex solutions and recommend architectural improvements to transform applications with the latest AI technologies. Drive the adoption of cutting-edge GenAI technologies, spearheading initiatives that push the boundaries of AI integration across the full spectrum of environments.

Thought Leadership and Client Engagement
Provide expert architectural and strategy guidance to clients on incorporating Generative AI into their business and technology landscape. Conduct workshops, briefings, and strategic dialogues to educate clients on AI benefits and applications, establishing strong, trust-based relationships. Act as a trusted advisor, contributing to technical projects with a strong focus on technical excellence and on-time delivery. Author whitepapers and blogs, and speak at industry events, maintaining a visible presence as a thought leader in AI and associated technologies.

Collaboration and Customer Engagement
Engage with multiple customers simultaneously, providing high-impact consultative relationships. Work closely with internal teams and global partners to ensure seamless collaboration and knowledge sharing across projects. Maintain hands-on technical credibility, staying ahead of industry trends and mentoring others in the organization.

Mandatory Skills & Experience
Experience: 10+ years in architecture design; 7+ years in software engineering.
Technologies: Professional-level expertise in Public Cloud environments (AWS, Azure, Google Cloud). Demonstrable coding proficiency in Python, Java, or Go.
AI Expertise: Advanced machine learning algorithms; GenAI models (e.g., GPT, BERT, DALL-E, Gemini); NLP techniques. Working familiarity with Copilot solutions in both the software engineering and office productivity domains.
Communication: Exemplary verbal and written communication skills.
Project Methodologies: Agile and Scrum project management.

Desired Skills & Experience
Knowledge of GenAI operations (LLMOps), with experience governing AI models in production environments. Proficiency in data engineering for AI, including data preprocessing, feature engineering, and pipeline creation. Expertise in AI model fine-tuning and evaluation, with a focus on improving performance for specialized tasks. Copilot design, engineering, and extensions. Knowledge of Responsible AI, including governance and ethics, and bias mitigation, with experience implementing strategies to ensure fair and unbiased AI solutions. Deep learning frameworks (TensorFlow, PyTorch). Innovation and emerging technology trends. Strategic AI vision and roadmapping. Enthusiastic about working in a fast-paced environment using the latest technologies, and passionate about HCL's dynamic and high-energy Lab culture.

Verifiable Certification
Recognized professional certification from Google, Microsoft, or AWS in an AI- and/or Cloud-related domain.

Soft Skills and Behavioural Competencies
Exemplary communication and leadership skills, capable of inspiring teams and making strategic decisions that align with business goals. Demonstrates a strong customer orientation, innovative problem-solving abilities, and effective cross-cultural collaboration. Expert at driving organizational change and fostering a culture of innovation.
Posted 3 months ago
10.0 - 14.0 years
18 - 20 Lacs
Noida
Work from Office
Position Summary
This is a highly visible role that requires a perfect combination of deep technical credibility, strategic acumen, and demonstrable leadership competency. You will be the ultimate Trusted Advisor, capable of engaging business and technology leaders within the world's largest enterprises and guiding their strategic AI-enabled journey. The Country Leader, AI Architecture, is responsible for leading the Lab's architectural services within the region. You will need to provide hands-on technical leadership whilst managing a small team of senior AI architects and consultants, operating in a fast-moving, highly innovative environment and collaborating with senior Sales and Technical leaders. You will have business responsibility for the provision of innovation-led Lab services, focusing on the design and implementation of advanced AI solutions that enable genuine transformational outcomes. This hands-on leadership role demands a deep understanding of AI and related technologies running in Edge, on-prem, and Public Cloud environments. Acting at the forefront of our industry, you will be fully conversant with Generative AI and its impact at both the individual-employee and strategic organisational level. The ideal candidate will be an established thought leader in the AI domain, with solid architectural and engineering credentials maintained at the highest level, working ahead of industry trends, deeply passionate about AI-enabled business transformation, and demonstrating a strong innovation-led posture. As a thought leader, you will interact frequently with CxO-level clients and industry leaders, provide expert opinions, and contribute to HCL's strategic vision.

Key Responsibilities

Technical & Engineering Leadership
Act as the ultimate Design Authority for sophisticated AI solutions and related technology architecture.
Lead high-level architectural discussions with clients, providing expert guidance on best practices for AI implementations across AI PC, Edge, Data Centre, and Public Cloud environments. Ensure solutions align with modern best practices across the full spectrum of platforms and environments, with a deep understanding across the GPU/NPU, Cognitive Infrastructure, Application, and Copilot/agent domains. Contribute to HCLTech thought leadership in the AI and Cloud domains with a deep understanding of open-source (e.g., Kubernetes, OPEA) and partner technologies. Collaborate on joint technical projects with global partners, including Google, Microsoft, AWS, NVIDIA, IBM, Red Hat, Intel, and Dell.

Service Delivery & Innovation
Design innovative AI solutions from ideation to MVP, rapidly enabling genuine business value. Optimize AI and cloud architectures to meet client requirements, balancing efficiency, accuracy, and effectiveness. Assess and review existing complex solutions and recommend architectural improvements to transform applications with the latest AI technologies. Drive the adoption of cutting-edge GenAI technologies, spearheading initiatives that push the boundaries of AI capability across the full spectrum of environments.

Thought Leadership and Client Engagement
Provide expert architectural and strategy guidance to clients on incorporating Generative AI into their business and technology landscape. Conduct workshops, briefings, and strategic dialogues to educate clients on AI benefits and applications, establishing strong, trust-based relationships. Act as a trusted advisor, contributing to technical projects with a strong focus on technical excellence and on-time delivery. Author whitepapers and blogs, and speak at industry events, maintaining a visible presence as a thought leader in AI and associated technologies.

Collaboration and Customer Engagement
Engage with multiple customers simultaneously, building high-impact consultative relationships. Work closely with internal teams and global partners to ensure seamless collaboration and knowledge sharing across projects. Maintain hands-on technical credibility, staying ahead of industry trends and mentoring others in the organization.

Management and Leadership
Demonstrable track record of building and managing small architectural or engineering teams. Support the career growth and professional development of the team. Enrich and enable world-class technical excellence across the team, supported by a culture of collaboration, respect, diversity, inclusion, and deep, trustful relationships.

Mandatory Skills & Experience
Management & Leadership: Demonstrable track record of building and leading architectural or engineering teams. Proven ability to combine strategic business and commercial skills, performing at the highest level in senior client relationships.
Experience: 10+ years in architecture design; 10+ years in software engineering; 5+ years in a senior Team Leader or similar management position. Significant client-facing engagement within a GSI, system integrator, professional services, or technology organization.
Technologies: Professional-level expertise in Public Cloud environments (AWS, Azure, Google Cloud). Demonstrable coding proficiency in Python, Java, or Go.
AI Expertise: Advanced machine learning algorithms; GenAI models (e.g., GPT, BERT, DALL-E, Gemini); NLP techniques. Working familiarity with Copilot solutions in both the software engineering and office productivity domains.
Business Expertise: Extensive track record performing a lead technical role in a sales, business-development, or other commercial environment. Negotiating and consultative skills; experience leading the complete engagement lifecycle.
Communication: Experienced public speaker, with an ability to connect with senior business leaders.
Project Methodologies: Agile and Scrum project management.
Desired Skills & Experience
Knowledge of GenAI operations (LLMOps), with experience governing AI models in production environments. Proficiency in data engineering for AI, including data preprocessing, feature engineering, and pipeline creation. Expertise in AI model fine-tuning and evaluation, with a focus on improving performance for specialized tasks. Copilot design, engineering, and extensions. Knowledge of Responsible AI, including governance and ethics, and bias mitigation, with experience implementing strategies to ensure fair and unbiased AI solutions. Deep learning frameworks (TensorFlow, PyTorch). Innovation and emerging technology trends. Strategic AI vision and roadmapping. Enthusiastic about working in a fast-paced environment using the latest technologies, and passionate about HCL's dynamic and high-energy Lab culture.

Verifiable Certification
Recognized professional certification from Google, Microsoft, or AWS in an AI- and/or Cloud-related domain.

Soft Skills and Behavioural Competencies
Exemplary communication and leadership skills, capable of inspiring teams and making strategic decisions that align with business goals. Demonstrates a strong customer orientation, innovative problem-solving abilities, and effective cross-cultural collaboration. Expert at driving organizational change and fostering a culture of innovation.
Posted 3 months ago
5.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Develop and deploy ML pipelines using MLOps tools, build FastAPI-based APIs, support LLMOps and real-time inferencing, collaborate with DS/DevOps teams, and ensure performance and CI/CD compliance in AI infrastructure projects.

Required Candidate Profile
Experienced Python developer with 4-8 years in MLOps, FastAPI, and AI/ML system deployment. Exposure to LLMOps, GenAI models, and containerized environments, with strong collaboration across the ML lifecycle.
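The "ML pipelines" part of this role can be pictured with a minimal, framework-free sketch: a pipeline is just an ordered chain of stages, each consuming the previous stage's output. The stage names and the toy stand-in "model" below are invented for illustration; a real deployment would use MLOps tooling and a trained model, not plain functions.

```python
# Minimal sketch of a chained ML inference pipeline. Each stage is a plain
# function; run_pipeline composes them in order.

def normalize(features):
    # Scale features to the 0-1 range (assumes non-negative inputs).
    peak = max(features) or 1.0
    return [f / peak for f in features]

def predict(features):
    # Toy stand-in for a trained model: the mean of the normalized features.
    return sum(features) / len(features)

def run_pipeline(features, stages):
    # Feed each stage the output of the previous one.
    out = features
    for stage in stages:
        out = stage(out)
    return out

score = run_pipeline([2.0, 4.0, 8.0], [normalize, predict])
```

The same composition idea scales up: in production each "stage" becomes a versioned, monitored step (feature extraction, model call, post-processing) behind an API.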
Posted 3 months ago
12.0 - 18.0 years
35 - 40 Lacs
Chennai
Work from Office
Tech stack required:
Programming languages: Python
Public Cloud: Azure
Frameworks: Vector databases such as Milvus, Qdrant, or ChromaDB, or the use of CosmosDB or MongoDB as vector stores. Knowledge of AI orchestration, AI evaluation, and observability tools. Knowledge of guardrails strategies for LLMs. Knowledge of Arize or any other ML/LLM observability tool.
Experience: Experience building functional platforms using ML, CV, and LLM platforms. Experience evaluating and monitoring AI platforms in production.
Nice-to-have requirements: Excellent communication skills, both written and verbal. Strong problem-solving and critical-thinking abilities. Effective leadership and mentoring skills. Ability to collaborate with cross-functional teams and stakeholders. Strong attention to detail and a commitment to delivering high-quality solutions. Adaptability and willingness to learn new technologies. Time management and organizational skills to handle multiple projects and priorities.
Posted 3 months ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role: Gen AI Engineer
Exp: 5 to 8 yrs
Loc: Bangalore, Pune, Hyderabad
NP: Immediate joiners or candidates who can join within 30 days
Required Skills: Python, Large Language Models (LLM), Machine Learning (ML), Generative AI
Posted 3 months ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description: Oracle Cloud Infrastructure (OCI) is a pioneering force in cloud technology, merging the agility of startups with the robustness of an enterprise software leader. Within OCI, the Oracle Generative AI Service team spearheads innovative solutions at the convergence of artificial intelligence and cloud infrastructure. As part of this team, you'll contribute to large-scale cloud solutions built on cutting-edge machine learning technologies and aimed at addressing complex global challenges. We're looking for an experienced Principal Applied Data Scientist to join our OCI Gen-AI Solutions team for strategic customers. In this role, you'll collaborate with applied scientists and product managers to design, develop, and deploy tailored Gen-AI solutions, with an emphasis on Large Language Models (LLMs), Agents, MPC, and Retrieval-Augmented Generation (RAG) with large OpenSearch clusters. As part of the OCI Gen AI and Data Solutions for strategic customers team, you will be responsible for developing innovative Gen AI and data services for our strategic customers. As a Principal Applied Data Scientist, you'll lead the development of advanced Gen AI solutions using the latest ML technologies combined with Oracle's cloud expertise. Your work will significantly impact sectors like financial services, telecom, healthcare, and code generation by creating distributed, scalable, high-performance solutions for strategic customers. You will work directly with key customers and accompany them on their Gen AI journey: understanding their requirements, helping them envision, design, and build the right solutions, and working with their ML engineering teams to remove blockers. You will dive deep into model structure to optimize model performance and scalability. You will build state-of-the-art solutions with brand-new technologies in this fast-evolving area.
You will configure large-scale OpenSearch clusters and set up ingestion pipelines to get data into OpenSearch. You will diagnose, troubleshoot, and resolve issues in AI model training and serving. You may also perform other duties as assigned. You will build reusable solution patterns and reference solutions/showcases that apply across multiple customers. You will be an enthusiastic, self-motivated, and effective collaborator, and act as our product evangelist: engaging directly with customers and partners, and participating and presenting at external events and conferences.

Qualifications and Experience
Bachelor's or Master's in Computer Science or an equivalent technical field, with 10+ years of experience. Able to communicate technical ideas effectively, verbally and in writing (technical proposals, design specs, architecture diagrams, and presentations). Demonstrated experience designing and implementing scalable AI models and solutions for production; relevant professional experience as an end-to-end solutions engineer or architect (data engineering, data science, and ML engineering is a plus), with evidence of close collaboration with PM and Dev teams. Experience with OpenSearch, vector databases, PostgreSQL, and Kafka Streaming. Practical experience setting up and fine-tuning large OpenSearch clusters. Experience setting up data ingestion pipelines with OpenSearch. Experience with search algorithms, indexing, and optimizing latency and response times. Practical experience with the latest technologies in LLMs and generative AI, such as parameter-efficient fine-tuning, instruction fine-tuning, and advanced prompt engineering techniques like Tree-of-Thoughts. Familiarity with Agents, Agent frameworks, and Model Predictive Control (MPC). Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
Strong publication record, including as a lead author or reviewer, in top-tier journals or conferences. Ability and passion to mentor and develop junior machine learning engineers. Proficient in Python and shell scripting tools.

Preferred Qualifications:
Master's or Bachelor's in a related field with 5+ years of relevant experience. Experience with RAG-based solution architectures. Familiarity with OpenSearch and vector stores as a knowledge store. Knowledge of LLMs and experience delivering Generative AI and Agent models is a significant plus. Familiarity and experience with the latest advancements in computer vision and multimodal modeling is a plus. Experience with semantic search, multi-modal search, and conversational search. Experience working in a public cloud environment, with in-depth knowledge of the IaaS/PaaS industry and competitive capabilities. Experience with popular model training and serving frameworks like KServe, Kubeflow, Triton, etc. Experience with LLM fine-tuning, especially the latest parameter-efficient fine-tuning and multi-task serving technologies. Deep technical understanding of Machine Learning, Deep Learning architectures like Transformers, training methods, and optimizers. Experience with deep learning frameworks (such as PyTorch, JAX, or TensorFlow) and deep learning architectures (especially Transformers). Experience diagnosing, fixing, and resolving issues in AI model training and serving.

Career Level - IC4
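The ingestion-pipeline work described in this role typically begins with splitting source documents into overlapping chunks before indexing them into OpenSearch. A minimal sketch of that first step follows; the chunk size and overlap values are illustrative only, and the actual OpenSearch indexing call (e.g., via a bulk API) is deliberately omitted:

```python
def chunk_text(text, size=200, overlap=50):
    # Split text into overlapping character windows so content cut at a
    # chunk boundary still appears whole in the neighbouring chunk.
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# Toy document: 500 characters of repeated text.
docs = chunk_text("word " * 100, size=40, overlap=10)
```

Each resulting chunk would then be embedded and written to the search index; overlap exists so a query matching text near a boundary can still retrieve a chunk containing the full passage.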
Posted 3 months ago
10.0 - 18.0 years
30 - 45 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role - Senior Data Scientist / Senior Gen AI Engineer
Exp Range - 8 to 18 yrs
Position - Permanent Fulltime
Company - Data Analytics & AIML MNC
Location - Hyderabad, Pune, Bangalore (relocation accepted)

About the Role: We are seeking a Software Engineer with expertise in Generative AI and Microsoft technologies to design, develop, and deploy AI-powered solutions using the Microsoft ecosystem. You will work with cross-functional teams to build scalable applications leveraging generative AI models and Azure services.

Skills Required: Experience with Large Language Models (LLMs) like GPT, LLaMA, Claude, etc. Proficiency in Python for building and fine-tuning AI/ML models. Familiarity with LangChain, LLMOps, or RAG (Retrieval-Augmented Generation) pipelines. Experience with vector databases (e.g., FAISS, Pinecone, Weaviate). Knowledge of prompt engineering and model evaluation techniques. Exposure to cloud platforms (Azure, AWS, or GCP) for deploying GenAI solutions.

Preferred Skills: Experience with Azure OpenAI, Databricks, or Microsoft Fabric. Hands-on with Hugging Face Transformers, OpenAI APIs, or custom model training.
Posted 3 months ago
0.0 years
3 - 6 Lacs
Delhi, Delhi, IN
On-site
About the job:
Key responsibilities:
1. Build AI-driven tools and products using APIs (OpenAI, Gemini, etc.)
2. Design and fine-tune prompts for various use cases
3. Integrate vector databases (Pinecone, ChromaDB) for retrieval-augmented generation (RAG)
4. Use tools like LangChain or LlamaIndex for multi-step workflows
5. Collaborate with designers, content teams, and founders to turn ideas into polished tools

Who can apply: Only candidates who are Computer Science Engineering students.
Salary: ₹ 3,20,000 - 6,50,000 /year
Experience: 0 year(s)
Deadline: 2025-06-22 23:59:59
Skills required: Natural Language Processing (NLP), Deep Learning, Prompt Engineering, ChatGPT, Claude, Gemini, LLMOps, and model fine-tuning

Other Requirements:
1. Degree: B.Tech in AI/ML, or others who have completed AI/ML projects
2. Strong understanding of LLM APIs (OpenAI, Claude, Gemini, etc.)
3. REST API integration and deployment knowledge
4. GitHub portfolio with working AI tools or integrations

About Company: Stirring Minds is a premier startup ecosystem in India, dedicated to helping businesses launch, scale, and succeed. As a leading incubator, we provide funding, co-working spaces, and mentorship to support the growth of innovative companies. In addition to our incubator services, we host the largest startup event in the country, Startup Summit Live, bringing together entrepreneurs and industry leaders to connect, learn, and collaborate. Our community-driven approach extends beyond our event and incubator offerings, as we work to create communities of like-minded individuals who can support and learn from one another. We have been recognized by top media outlets both in India and internationally, including the BBC, The Guardian, Entrepreneur, and Business Insider. Our goal is to provide a comprehensive ecosystem for startups and help turn their ideas into reality.
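The RAG responsibilities above (items 3 and 4) reduce, at their core, to embedding-similarity retrieval followed by prompt assembly. A minimal stdlib-only sketch follows; the two-dimensional vectors and document texts are toy stand-ins for real embeddings, and no actual vector database or LLM call is made:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, k=2):
    # Rank stored (vector, text) pairs by similarity to the query vector.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# Toy index: pretend these vectors came from an embedding model.
index = [
    ([1.0, 0.0], "Doc about billing"),
    ([0.0, 1.0], "Doc about onboarding"),
    ([0.9, 0.1], "Doc about invoices"),
]
context = retrieve([1.0, 0.1], index, k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

In a production system, Pinecone or ChromaDB replaces the in-memory list, an embedding model produces the vectors, and the assembled prompt is sent to an LLM API.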
Posted 3 months ago
0.0 years
3 - 4 Lacs
IN
Remote
About the job:
Key responsibilities:
1. Design and develop scalable backend systems and APIs for our AI-powered SaaS platform using Python and Node.js
2. Build and maintain cloud infrastructure on AWS, including configuration and management of S3, DynamoDB, SNS, EC2, and CloudWatch services
3. Implement and optimize data processing pipelines for machine learning model deployment and integration
4. Collaborate with data scientists to integrate AI models into production systems and ensure efficient model serving
5. Deploy and monitor applications using DevOps practices and LLMOps for large language model implementations
6. Create robust API endpoints that connect our frontend applications with AI functionalities
7. Design and implement efficient database schemas and queries optimized for AI applications
8. Develop and maintain secure authentication and authorization systems for our platform
9. Write clean, maintainable, and well-tested code following best practices
10. Troubleshoot and resolve complex technical issues in production environments

Additional candidate preferences:
1. Computer Science or related Engineering degree preferred
2. Experience with containerization technologies like Docker
3. Familiarity with AI model serving platforms

Who can apply: Only candidates who are Computer Science Engineering students.
Salary: ₹ 3,10,000 - 4,60,000 /year
Experience: 0 year(s)
Deadline: 2025-06-16 23:59:59
Other perks: 5 days a week
Skills required: Python, Node.js, Artificial intelligence, DevOps, Amazon EC2, Amazon S3, Amazon CloudWatch, Amazon SNS, Amazon DynamoDB, and LLMOps

Other Requirements:
1. Computer Science or related Engineering degree preferred
2. Experience with containerization technologies like Docker
3. Familiarity with AI model serving platforms and ML workflows

About Company: Smartify is a marketplace for automation companies and also India's leading home automation store.
We are trying to reduce the knowledge-execution gap and encourage early-adopters in the IoT space to launch their products and get to the mainstream market.
Posted 4 months ago
1 - 6 years
7 - 14 Lacs
Hyderabad
Work from Office
Position - AI Engineer
As an AI Engineer, you will design, implement, and optimize machine learning models and AI systems to solve complex problems. You will work closely with cross-functional teams to integrate AI solutions into our products and services, ensuring scalability and efficiency.

Key Responsibilities:
Application Development: Design and develop AI-powered applications using state-of-the-art LLMs and generative AI techniques. Implement scalable solutions that integrate LLM-powered tools into existing workflows or standalone products.
Model Optimization: Fine-tune pre-trained LLMs to meet specific application requirements. Optimize model performance for real-time and high-throughput environments.
LLMOps Implementation: Develop and maintain pipelines for model deployment, monitoring, and retraining. Set up robust systems for model performance monitoring and diagnostics. Ensure reliable operations through analytics and insights into model behavior.
Vector Databases and Data Management: Utilize vector databases for efficient storage and retrieval of embeddings. Integrate databases with LLM applications to enhance query and recommendation systems.
Collaboration and Innovation: Work closely with cross-functional teams, including product managers, data scientists, and software engineers. Stay up to date with advancements in generative AI and LLM technologies to drive innovation.

Skills and Experience:
3+ years of experience in AI/ML development, with a focus on generative AI and LLMs. Proficiency in programming languages such as Python and frameworks like PyTorch or TensorFlow. Hands-on experience fine-tuning and deploying LLMs (e.g., GPT, BERT, etc.). Familiarity with LLMOps practices, including pipeline automation, monitoring, and analytics. Experience with vector databases (e.g., Pinecone, Weaviate, or similar). Strong knowledge of natural language processing (NLP) and machine learning principles.
You should certainly apply if you have: an understanding of MLOps principles and cloud platforms (AWS, GCP, Azure); familiarity with prompt engineering and reinforcement learning from human feedback (RLHF); experience building real-time applications powered by generative AI; and knowledge of distributed systems and scalable architectures.
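The vector-database responsibility above ("efficient storage and retrieval of embeddings") reduces, at its smallest, to storing vectors and answering nearest-neighbour queries. The in-memory stand-in below is a toy: real systems such as Pinecone or Weaviate add persistence, approximate-nearest-neighbour indexing, and metadata filtering, none of which is modelled here.

```python
import heapq

class TinyVectorStore:
    """In-memory embedding store with exact, brute-force top-k search."""

    def __init__(self):
        self._items = []  # list of (embedding, payload) pairs

    def add(self, embedding, payload):
        self._items.append((tuple(embedding), payload))

    def search(self, query, k=3):
        # Smallest squared Euclidean distance = closest embedding.
        def dist(vec):
            return sum((q - v) ** 2 for q, v in zip(query, vec))
        best = heapq.nsmallest(k, self._items, key=lambda item: dist(item[0]))
        return [payload for _, payload in best]

store = TinyVectorStore()
store.add([0.0, 0.0], "origin")
store.add([1.0, 1.0], "far")
store.add([0.1, 0.0], "near")
hits = store.search([0.0, 0.1], k=2)
```

Brute force is O(n) per query, which is why production stores switch to ANN indexes (HNSW, IVF) once collections grow beyond a few hundred thousand vectors.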
Posted 4 months ago
1.0 years
3 - 5 Lacs
IN
Remote
About the job: We empower the people who build the world. Taiy.AI is the world's largest infrastructure-construction data-mesh technology and the first AI platform for the global infrastructure construction industry. Our clients include some of the largest construction firms, suppliers, and governments.

About the Team: We are looking for a Python Engineer to help support and lead our data engineering operations.

Key Responsibilities:
1. Develop and execute processes for monitoring data sanity, checking for data availability and reliability
2. Understand the business drivers and build insights through data
3. Partner with stakeholders at all levels to establish current and ongoing data support and reporting needs
4. Ensure continuous data accuracy and recognize data discrepancies in systems that require immediate attention or escalation
5. Become an expert in the company's data warehouse and other data storage tools, understanding the definition, context, and proper use of all attributes and metrics
6. Create dashboards based on business requirements
7. Work with distributed systems, Scala, cloud, caching, CI/CD (continuous integration and deployment), distributed logging, data pipelines, recommendation engines, and data-at-rest encryption

What To Bring:
1. Graduate/postgraduate degree in Computer Science or Engineering
2. 1-3 years of hands-on experience with AWS OpenSearch 1.0 or Elasticsearch 7.9
3. 3+ years of work experience with Scala
4. Ability to drive, design, code, review work, and assist the teams
5. Good problem-solving skills
6. Good oral and written communication in English
7. Openness to, or experience of, working in a fast-paced delivery environment
8. Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization
9. Good to have: experience with Elasticsearch and Spark-Elasticsearch
10. Knowledge of garbage collection and experience in GC tuning
11. Knowledge of algorithms like sorting, heap/stack, queue, search, etc.
12. Experience with Git and build tools like Gradle/Maven/SBT
13. Ability to write complex queries independently
14. Strong knowledge of programming languages like Python, Scala, etc.
15. Ability to work independently and take ownership of things
16. An analytical mindset and strong attention to detail
17. Good verbal and written communication skills for coordinating across teams

Who can apply: Only candidates who have a minimum of 1 year of experience and are Computer Science Engineering students.
Salary: ₹ 3,00,000 - 5,00,000 /year
Experience: 1 year(s)
Deadline: 2025-06-05 23:59:59
Other perks: 5 days a week
Skills required: Python, Selenium, Machine Learning, REST API, Data Extraction, Data Engineering, and LLMOps

About Company: Taiyo is a Silicon Valley startup that aggregates, predicts, and visualizes the world's data so customers don't have to. We are a globally distributed team with a focus on the infrastructure vertical. The Taiyo team was founded by an interdisciplinary group of experts from Stanford University's AI Institute, the World Bank, the International Monetary Fund, and UC Berkeley.
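The data-sanity monitoring named in the first responsibility can be sketched as a handful of rule checks over a batch of incoming records. The field names ("price", "region") and the null-rate threshold below are invented examples, not anything from the actual platform:

```python
def sanity_report(records, required_fields, max_null_rate=0.1):
    # Flag empty batches and fields whose null rate exceeds the threshold.
    if not records:
        return ["batch is empty"]
    issues = []
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) is None)
        rate = nulls / len(records)
        if rate > max_null_rate:
            issues.append(
                f"{field}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    return issues

batch = [{"price": 10, "region": "IN"}, {"price": None, "region": "IN"}]
report = sanity_report(batch, ["price", "region"])
```

In practice, such checks run on a schedule against the warehouse, and a non-empty report triggers the escalation path mentioned in responsibility 4.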
Posted 4 months ago
2.0 - 7.0 years
1 - 6 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Gen AI Developer with 2 to 7 years of hands-on experience, to architect and deliver cutting-edge AI solutions using Large Language Models (LLMs), diffusion models, and other generative frameworks.
Posted Date not available
3.0 - 6.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Position: Machine Learning Engineer - Generative AI / LLMs (Onsite - Bengaluru)
Experience: 3+ years of industry experience in ML, software engineering, and data engineering
Education: Master's degree or equivalent experience in Machine Learning
Location: Bangalore (HSR Layout)

Job Description: We are looking for a talented Machine Learning Engineer to join our AI team at our Bengaluru office (onsite only). In this role, you will work on cutting-edge solutions leveraging Generative AI, Large Language Models (LLMs), NLP, and Computer Vision to solve complex real-world problems.

Responsibilities: Design, build, and deploy ML models for NLP, Computer Vision, LLM, and Generative AI use cases. Develop and maintain robust, scalable ML pipelines for training, evaluation, and deployment. Optimize model performance using cloud-based GPU resources and best practices. Implement scalable inference systems and A/B testing, and monitor production models. Collaborate with cross-functional teams to deliver AI-driven product features. Follow best practices in MLOps, LLMOps, Kubernetes, and Docker.

Required Skills: Strong knowledge of Machine Learning, Deep Learning, NLP, and LLMs. Experience with Python, PyTorch, and TensorFlow. Familiarity with Generative AI frameworks: Hugging Face, LangChain, MLflow, LangGraph, LangFlow. Cloud platforms: AWS (SageMaker, Bedrock), Azure AI. Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB. MLOps tools, Kubernetes, Docker.

Preferred Qualifications: 3+ years of experience in ML/AI product development. Proven experience building and deploying NLP, LLM, and Generative AI solutions. Experience working with LLMOps best practices. Strong programming skills in Python and familiarity with JavaScript.

Interested candidates, kindly share your CV and the details below to usha.sundar@adecco.com:
1) Present CTC (Fixed + VP)
2) Expected CTC
3) No. of years' experience
4) Notice period
5) Offer in hand
6) Reason for change
7) Present location
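The A/B testing of production models mentioned in the responsibilities is commonly implemented as a deterministic, hash-based traffic split, so the same user always lands on the same model variant without any stored state. A minimal sketch, with invented variant names and an illustrative 50/50 split:

```python
import hashlib

def assign_variant(user_id, experiment, split=0.5):
    # Hash the user + experiment name into [0, 1); the split point decides
    # which model serves this user. Deterministic: no assignment table needed.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "model_b" if bucket < split else "model_a"

variant = assign_variant("user-42", "reranker-v2")
```

Keying the hash on both user and experiment name means a user's bucket in one experiment is independent of their bucket in another, which keeps concurrent experiments from confounding each other.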
Posted Date not available
5.0 - 8.0 years
30 - 45 Lacs
hyderabad, bengaluru, delhi / ncr
Work from Office
About the Role
We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions. You will also be expected to apply modern LLMOps practices, handle schema-constrained generation, optimize cost and latency trade-offs, mitigate hallucinations, and ensure robust safety, personalization, and observability across GenAI systems.

Responsibilities

Generative AI Pipeline Development
- Design and implement scalable and modular pipelines for data ingestion, transformation, and orchestration across GenAI workloads.
- Manage data and model flow across LLMs, embedding services, vector stores, SQL sources, and APIs.
- Build CI/CD pipelines with integrated prompt regression testing and version control.
- Use orchestration frameworks such as LangChain or LangGraph for tool routing and multi-hop workflows.
- Monitor system performance using tools such as Langfuse or Prometheus.

Data and Document Ingestion
- Develop systems to ingest unstructured (PDF, OCR) and structured (SQL, APIs) data.
- Apply preprocessing pipelines for text, images, and code.
- Ensure data integrity, format consistency, and security across sources.

AI Service Integration
- Integrate external and internal LLM APIs (OpenAI, Claude, Mistral, Qwen, etc.).
- Build internal APIs for smooth backend-AI communication.
- Optimize performance through fallback routing to classical or smaller models based on latency or cost budgets.
- Use schema-constrained prompting and output filters to suppress hallucinations and maintain factual accuracy.
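For illustration only (not part of the posting), the latency- and cost-based fallback routing described above can be sketched in a few lines of Python. The `primary` and `fallback` callables here are hypothetical stand-ins for real LLM API clients:

```python
import time

def route_with_fallback(prompt, primary, fallback, latency_budget_s=2.0):
    """Send the prompt to the primary model; fall back to a smaller or
    classical model when the call fails or exceeds the latency budget."""
    start = time.monotonic()
    try:
        reply = primary(prompt)
        if time.monotonic() - start <= latency_budget_s:
            return reply, "primary"
    except Exception:
        pass  # primary unavailable: fall through to the cheaper model
    return fallback(prompt), "fallback"

# Hypothetical model callables standing in for real API clients.
big_model = lambda p: f"[big] {p}"
small_model = lambda p: f"[small] {p}"

answer, route = route_with_fallback("Summarise the report", big_model, small_model)
```

In production the same pattern is usually wrapped with retries, per-request cost accounting, and observability hooks rather than a bare `try/except`.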
Retrieval-Augmented Generation (RAG) Pipelines
- Build hybrid RAG pipelines using vector similarity (FAISS/Qdrant) and structured data (SQL/API).
- Design custom retrieval strategies for multi-modal or multi-source documents.
- Apply post-retrieval ranking using DPO or feedback-based techniques.
- Improve contextual relevance through re-ranking, chunk merging, and scoring logic.

LLM Integration and Optimization
- Manage prompt engineering, model interaction, and tuning workflows.
- Implement LLMOps best practices: prompt versioning, output validation, caching (KV store), and fallback design.
- Optimize generation using temperature tuning, token limits, and speculative decoding.
- Integrate observability and cost monitoring into LLM workflows.

Backend Services Ownership
- Design and maintain scalable backend services supporting GenAI applications.
- Implement monitoring, logging, and performance tracing.
- Build RBAC (Role-Based Access Control) and multi-tenant personalization.
- Support containerization (Docker, Kubernetes) and autoscaling infrastructure for production.

Required Skills and Qualifications

Education
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Experience
- 5+ years of experience in AI/ML engineering with end-to-end pipeline development.
- Hands-on experience building and deploying LLM/RAG systems in production.
- Strong experience with public cloud platforms (AWS, Azure, or GCP).

Technical Skills
- Proficient in Python and libraries such as Transformers, SentenceTransformers, and PyTorch.
- Deep understanding of GenAI infrastructure, LLM APIs, and toolchains such as LangChain/LangGraph.
- Experience with RESTful API development and version control using Git.
- Knowledge of vector DBs (Qdrant, FAISS, Weaviate) and similarity-based retrieval.
- Familiarity with Docker, Kubernetes, and scalable microservice design.
- Experience with observability tools such as Prometheus, Grafana, or Langfuse.
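As an illustration (not part of the posting), the similarity-based retrieval at the core of these RAG pipelines reduces to ranking document embeddings by cosine similarity to a query embedding; the pure-Python sketch below stands in for what a vector store such as FAISS or Qdrant does at scale, with made-up document IDs and two-dimensional embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector,
    returning the top_k hits; this is the operation a vector store performs."""
    scored = [(doc_id, cosine(query_vec, emb)) for doc_id, emb in corpus]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]

# Hypothetical corpus: IDs and toy 2-D embeddings for demonstration.
corpus = [("doc_a", [1.0, 0.0]), ("doc_b", [0.7, 0.7]), ("doc_c", [0.0, 1.0])]
hits = retrieve([1.0, 0.1], corpus)  # nearest documents first
```

Re-ranking and chunk merging, mentioned above, then operate on the `hits` list before the retrieved text is placed into the LLM context.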
Generative AI Specific Skills
- Knowledge of LLMs, VAEs, Diffusion Models, and GANs.
- Experience building structured and unstructured RAG pipelines.
- Prompt engineering with safety controls, schema enforcement, and hallucination mitigation.
- Experience with prompt testing, caching strategies, output filtering, and fallback logic.
- Familiarity with DPO, RLHF, or other feedback-based fine-tuning methods.

Soft Skills
- Strong analytical, problem-solving, and debugging skills.
- Excellent collaboration with cross-functional teams: product, QA, and DevOps.
- Ability to work in fast-paced, agile environments and deliver production-grade solutions.
- Clear communication and strong documentation practices.

Preferred Qualifications
- Experience with OCR, document parsing, and layout-aware chunking.
- Hands-on experience with MLOps and LLMOps tools for Generative AI.
- Contributions to open-source GenAI or AI infrastructure projects.
- Knowledge of GenAI governance, ethical deployment, and usage controls.
- Experience with hallucination-suppression frameworks such as Guardrails.ai, Rebuff, or Constitutional AI.

Experience and Shift
Experience: 5+ years
Shift Time: 2:30 PM to 11:30 PM IST
Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted Date not available
5.0 - 10.0 years
10 - 20 Lacs
bengaluru
Work from Office
Job Title: LLMOps Engineer - GenAI Platforms
Location: Bangalore
Minimum: 3 years of relevant experience
Maximum: 5 years total experience
Must include: Strong MLOps experience; hands-on with LLMs for at least 2 years, OR transitioned from MLOps to LLMOps with an understanding of the LLM lifecycle.

About the Role:
We are looking for a forward-thinking LLMOps Engineer to join our team and help build the next generation of secure, scalable, and responsible Generative AI (GenAI) platforms. This role will focus on establishing governance, security, and operational best practices while enabling development teams to build high-performing GenAI applications. You will also work closely with GenAI agents and integrate LLMs from multiple providers to support diverse use cases.

Key Responsibilities:
- Design and implement governance frameworks for GenAI platforms, ensuring compliance with internal policies and external regulations (e.g., GDPR, AI Act).
- Define and enforce responsible AI practices including fairness, transparency, explainability, and auditability.
- Implement robust security protocols including IAM, data encryption, secure API access, and model sandboxing.
- Collaborate with security teams to conduct risk assessments and ensure secure deployment of LLMs.
- Build and maintain scalable LLMOps pipelines for model training, fine-tuning, evaluation, deployment, and monitoring.
- Automate model lifecycle management with CI/CD, versioning, rollback, and observability.
- Develop and manage GenAI agents capable of reasoning, planning, and tool use.
- Integrate and orchestrate LLMs from multiple providers (e.g., OpenAI, Anthropic, Cohere, Google, Azure OpenAI) to support hybrid and fallback strategies.
- Optimize prompt engineering, context management, and agent memory for production use.
- Ensure high availability, low latency, and cost-efficiency of GenAI workloads across cloud and hybrid environments.
- Implement monitoring and alerting for model drift, hallucinations, and performance degradation.
- Partner with GenAI developers to embed best practices and reusable components (SDKs, templates, APIs).
- Provide technical guidance and documentation to accelerate development and ensure platform consistency.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in MLOps, DevOps, or platform engineering, with 1–2 years in LLM/GenAI environments.
- Deep understanding of LLMs, GenAI agents, prompt engineering, and inference optimization.
- Experience with LangChain, LlamaIndex, LangGraph, or similar agent frameworks.
- Hands-on experience with MLflow or equivalent tools.
- Proficient in Python, containerization (Docker), and cloud platforms (AWS/GCP/Azure).
- Familiarity with AI governance frameworks and responsible AI principles.
- Experience with vector databases (e.g., FAISS, Pinecone), RAG pipelines, and model evaluation frameworks.
- Knowledge of Responsible AI, red-teaming, and OWASP security principles.
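As an illustration (not part of the posting), the prompt versioning and rollback practice listed among the responsibilities can be sketched as a minimal in-memory registry; names like `PromptRegistry` are hypothetical, and a real deployment would back this with a database and tie it into CI/CD:

```python
class PromptRegistry:
    """Minimal sketch of prompt versioning with rollback: every registered
    template is kept, and the active version can be stepped back."""

    def __init__(self):
        self._versions = {}  # name -> list of prompt templates, oldest first
        self._active = {}    # name -> index of the active version

    def register(self, name, template):
        """Add a new version of a prompt and make it active."""
        self._versions.setdefault(name, []).append(template)
        self._active[name] = len(self._versions[name]) - 1
        return self._active[name]

    def rollback(self, name):
        """Step the active version back by one (no-op at the oldest)."""
        if self._active[name] > 0:
            self._active[name] -= 1
        return self._active[name]

    def get(self, name):
        """Return the currently active template for a prompt name."""
        return self._versions[name][self._active[name]]

reg = PromptRegistry()
reg.register("summarise", "Summarise: {text}")
reg.register("summarise", "Summarise in 3 bullets: {text}")
reg.rollback("summarise")  # regression detected: revert to the first version
```

Pairing a registry like this with prompt regression tests in CI is what makes the rollback safe to automate.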
Posted Date not available