Jobs
Interviews

181 Pinecone Jobs - Page 6

Setup a job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

4.0 - 6.0 years

18 - 22 Lacs

Pune

Work from Office

We are looking for a GenAI/ML Engineer to design, develop, and deploy cutting-edge AI/ML models and Generative AI applications . This role involves working on large-scale enterprise use cases, implementing Large Language Models (LLMs) , building Agentic AI systems , and developing data ingestion pipelines . The ideal candidate should have hands-on experience with AI/ML development , Generative AI applications , and a strong foundation in deep learning , NLP , and MLOps practices. Key Responsibilities Design, develop , and deploy AI/ML models and Generative AI applications for various enterprise use cases. Implement and integrate Large Language Models (LLMs) using frameworks such as LangChain , LlamaIndex , and RAG pipelines . Develop Agentic AI systems capable of multi-step reasoning and autonomous decision-making . Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval techniques. Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to deploy AI solutions and enhance the AI stack. Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices. Leverage Azure and Databricks for training , serving , and monitoring AI models at scale. Required Qualifications & Skills (Mandatory) 4+ years of hands-on experience in AI/ML development , including Generative AI applications . Expertise in RAG , LLMs , and Agentic AI implementations. Strong experience with LangChain , LlamaIndex , or similar LLM orchestration frameworks. Proficiency in Python and key ML/DL libraries : TensorFlow , PyTorch , Scikit-learn . Solid foundation in Deep Learning , Natural Language Processing (NLP) , and Transformer-based architectures . Experience in building data ingestion , indexing , and retrieval pipelines for real-world enterprise use cases. Hands-on experience with Azure cloud services and Databricks . Proven track record in designing CI/CD pipelines and using MLOps tools like MLflow , DVC , or Kubeflow . Soft Skills Strong problem-solving and critical thinking ability. Excellent communication skills , with the ability to explain complex AI concepts to non-technical stakeholders. Ability to collaborate effectively in agile , cross-functional teams . A growth mindset , eager to explore and learn emerging technologies. Preferred Qualifications Familiarity with vector databases such as FAISS , Pinecone , or Weaviate . Experience with AutoGPT , CrewAI , or similar agent frameworks . Exposure to Azure OpenAI , Cognitive Search , or Databricks ML tools . Understanding of AI security , responsible AI , and model governance . Role Dimensions Design and implement innovative GenAI applications to address complex business problems. Work on large-scale, complex AI solutions in collaboration with cross-functional teams. Take ownership of the end-to-end AI pipeline , from model development to deployment and monitoring. Success Measures (KPIs) Successful deployment of AI and Generative AI applications . Optimization of data pipelines and model performance at scale. Contribution to the successful adoption of AI-driven solutions within enterprise use cases. Effective collaboration with cross-functional teams, ensuring smooth deployment of AI workflows. Competency Alignment AI/ML Development : Expertise in building and deploying scalable and efficient AI models. Generative AI : Strong hands-on experience in Generative AI , LLMs , and RAG frameworks. MLOps : Proficiency in designing and maintaining CI/CD pipelines and implementing MLOps practices . Cloud Platforms : Experience with Azure and Databricks for AI model training and serving.

Posted 1 month ago

Apply

6.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Note: 1. Immediate to 30 days serving notice period 2.Who are available for face to face and video can apply Please add more profile for LLM engineer for weekend drive, below is the mandatory skills which delivery is looking for: 5+ years of relevant experience in Python , AI and machine learning - 2+ years of relevant experience in Gen AI LLM Hands-on experience with at least 1 end-to-end GenAI project Worked with LLMs such as GPT, Gemini, Claude, LLaMA, etc LLM skills: RAG, LangChain, Transformers, TensorFlow, PyTorch, spaCy Experience with REST API integration (e.g. FastAPI, Flask) Proficient in prompt types: zero-shot, few-shot, chain-of-thought - Knowledge of model training, fine-tuning, and deployment workflows LLMOPs - atleast 1 cloud (AZURE/AWS) , GITHUB , Docker/Kubernetes , CICD Pipeline - Comfortable with embedding models and vector databases (e.g. FAISS, Pinecone)

Posted 1 month ago

Apply

5.0 - 10.0 years

50 - 60 Lacs

Bengaluru

Work from Office

Job Title: AI/ML Architect GenAI, LLMs & Enterprise Automation Location: Bangalore Experience: 8+ years (including 4+ years in AI/ML architecture on cloud platforms) Role Summary We are seeking an experienced AI/ML Architect to define and lead the design, development, and scaling of GenAI-driven solutions across our learning and enterprise platforms. This is a senior technical leadership role where you will work closely with the CTO and product leadership to architect intelligent systems powered by LLMs, RAG pipelines, and multi-agent orchestration. You will own the AI solution architecture end-to-endfrom model selection and training frameworks to infrastructure, automation, and observability. The ideal candidate will have deep expertise in GenAI systems and a strong grasp of production-grade deployment practices across the stack. Must-Have Skills AI/ML solution architecture experience with production-grade systems Strong background in LLM fine-tuning (SFT, LoRA, PEFT) and RAG frameworks Experience with vector databases (FAISS, Pinecone) and embedding generation Proficiency in LangChain, LangGraph , LangFlow, and prompt engineering Deep cloud experience (AWS: Bedrock, ECS, Lambda, S3, IAM) Infra automation using Terraform, CI/CD via GitHub Actions or CodePipeline Backend API architecture using FastAPI or Node.js Monitoring & observability using Langfuse, LangWatch, OpenTelemetry Python, Bash scripting, and low-code/no-code tools (e.g., n8n) Bonus Skills Hands-on with multi-agent orchestration frameworks (CrewAI, AutoGen) Experience integrating AI/chatbots into web, mobile, or LMS platforms Familiarity with enterprise security, data governance, and compliance frameworks Exposure to real-time analytics and event-driven architecture You’ll Be Responsible For Defining the AI/ML architecture strategy and roadmap Leading design and development of GenAI-powered products and services Architecting scalable, modular, and automated AI systems Driving experimentation with new models, APIs, and frameworks Ensuring robust integration between model, infra, and app layers Providing technical guidance and mentorship to engineering teams Enabling production-grade performance, monitoring, and governance

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad

Remote

Crew AI Engineer Remote Contractual-6 months Job Description : We are looking for people with strong python skills, knowledge of multi agent frameworks like Crew AI, knowledge of RAG concepts mandatory. Good conceptual knowledge of LLM concepts Langraph and Langchain Required Skills : 5-8 years of experience in AI/ML or automation engineering. Strong hands-on experience with CrewAI or other LLM orchestration frameworks like LangChain, AutoGen, or Semantic Kernel. Proficiency in Python, including experience with async programming and API integration. Deep understanding of LLMs (OpenAI, Anthropic, Mistral, etc.) and prompt engineering. Familiarity with vector databases (e.g., Pinecone, FAISS, Chroma) and embeddings. Experience building and deploying production-ready agent-based solutions. Strong problem-solving skills and ability to translate business requirements into technical implementations.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com. Position: Senior Software Engineer-AI/ML Backend Developer Experience: 4-6 years Category: Software Development/ Engineering Location: Bangalore/Hyderabad/Chennai/Pune/Mumbai Shift Timing: General Shift Position ID: J0725-0150 Employment Type: Full Time Education Qualification: Bachelor's degree in computer science or related field or higher with minimum 4 years of relevant experience. We are seeking an experienced AI/ML Backend Developer to join our dynamic technology team. The ideal candidate will have a strong background in developing and deploying machine learning models, implementing AI algorithms, and managing backend systems and integrations. You will play a key role in shaping the future of our technology by integrating cutting-edge AI/ML techniques into scalable backend solutions. Your future duties and responsibilities Develop, optimize, and maintain backend services for AI/ML applications. Implement and deploy machine learning models to production environments. Collaborate closely with data scientists and frontend engineers to ensure seamless integration of backend APIs and services. Monitor and improve the performance, reliability, and scalability of existing AI/ML services. Design and implement robust data pipelines and data processing workflows. Identify and solve performance bottlenecks and optimize AI/ML algorithms for production. Stay current with emerging AI/ML technologies and frameworks to recommend and implement improvements. Required qualifications to be successful in this role Must-have Skills: - Python, TensorFlow, PyTorch, scikit-learn - Machine learning frameworks: TensorFlow, PyTorch, scikit-learn - Backend development frameworks: Flask, Django, FastAPI - Cloud technologies: AWS, Azure, Google Cloud Platform (GCP) - Containerization and orchestration: Docker, Kubernetes - Data management and pipeline tools: Apache Kafka, Apache Airflow, Spark - Database technologies: SQL databases (PostgreSQL, MySQL), NoSQL databases (MongoDB, Cassandra) - Vector Databases: Pinecone, Milvus, Weaviate - Version Control: Git - Continuous Integration/Continuous Deployment (CI/CD) pipelines: Jenkins, GitHub Actions, GitLab CI/CD Minimum of 4 years of experience developing backend systems, specifically in AI/ML contexts. Proven experience in deploying machine learning models and AI-driven applications in production. Solid understanding of machine learning concepts, algorithms, and deep learning techniques. Proficiency in writing efficient, maintainable, and scalable backend code. Experience working with cloud platforms (AWS, Azure, Google Cloud). Strong analytical and problem-solving skills. Excellent communication and teamwork abilities. Good-to-have Skills: - Java (preferred), Scala (optional) Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.,

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be working as an AI Engineer with expertise in Speech-to-text and Text Generation to tackle a Conversational AI challenge for a client in EMEA. The project aims to transcribe conversations and utilize generative AI-powered text analytics for enhancing engagement strategies and decision-making processes. Your main responsibilities will include developing Conversational AI & Call Transcription solutions, creating NLP & Generative AI Applications, performing Sentiment Analysis & Decision Support tasks, and handling AI Deployment & Scalability aspects. You will be expected to work on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key technical skills required for this role include a strong background in Speech-to-Text (ASR), NLP, and Conversational AI, along with hands-on experience in tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text, Python, PyTorch, TensorFlow, Hugging Face Transformers, LLM fine-tuning, RAG-based architectures, LangChain, and Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB). Experience in deploying AI models using Docker, Kubernetes, FastAPI, Flask will be essential. In addition to technical skills, soft skills such as translating AI insights into business impact, problem-solving abilities, and effective communication skills to collaborate with cross-functional teams will be crucial for success in this role. Preferred qualifications include experience in healthcare, pharma, or life sciences NLP use cases, a background in knowledge graphs, prompt engineering, and multimodal AI, as well as familiarity with Reinforcement Learning (RLHF) for enhancing conversation models.,

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

We are looking for a highly motivated Mid-Level AI Engineer to join our growing AI team. Your main responsibility will be to develop intelligent applications using Python, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) systems. Working closely with data scientists, backend engineers, and product teams, you will build and deploy AI-powered solutions that provide real-world value. Your key responsibilities will include designing, developing, and optimizing applications utilizing LLMs such as GPT, LLaMA, and Claude. You will also be tasked with implementing RAG pipelines to improve LLM performance using domain-specific knowledge bases and search tools. Developing and maintaining robust Python codebases for AI-driven solutions will be a crucial part of your role. Additionally, integrating vector databases like Pinecone, Weaviate, and FAISS, as well as embedding models for information retrieval, will be part of your daily tasks. You will work with APIs, frameworks like LangChain and Haystack, and various tools to create scalable AI workflows. Collaboration with product and design teams to define AI use cases and deliver impactful features will also be a significant aspect of your job. Conducting experiments to assess model performance, retrieval relevance, and system latency will be essential for continuous improvement. Staying up-to-date with the latest research and advancements in LLMs, RAG, and AI infrastructure is crucial for this role. To be successful in this position, you should have at least 3-5 years of experience in software engineering or AI/ML engineering, with a strong proficiency in Python. Experience working with LLMs such as OpenAI and Hugging Face Transformers is required, along with hands-on experience in RAG architecture and vector-based retrieval techniques. Familiarity with embedding models like SentenceTransformers and OpenAI embeddings is also necessary. Knowledge of API design, deployment, performance optimization, version control (e.g., Git), containerization (e.g., Docker), and cloud platforms (e.g., AWS, GCP, Azure) is expected. Preferred qualifications include experience with LangChain, Haystack, or similar LLM orchestration frameworks. Understanding NLP evaluation metrics, prompt engineering best practices, knowledge graphs, semantic search, and document parsing pipelines will be beneficial. Experience deploying models in production, monitoring system performance, and contributing to open-source AI/ML projects are considered advantageous for this role.,

Posted 2 months ago

Apply

1.0 - 5.0 years

0 Lacs

jaipur, rajasthan

On-site

As an AI/ML Engineer (Python) at Telepathy Infotech, you will be responsible for building and deploying machine learning and GenAI applications in real-world scenarios. You will be part of a passionate team of technologists working on innovative digital solutions for clients across industries. We value continuous learning, ownership, and collaboration in our work culture. To excel in this role, you should have strong Python skills and experience with libraries like Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch. Experience in GenAI development using APIs such as Google Gemini, Hugging Face, Grok, etc. is highly desirable. A solid understanding of ML, DL, NLP, and LLM concepts is essential along with hands-on experience in Docker, Kubernetes, and CI/CD pipeline creation. Familiarity with Streamlit, Flask, FastAPI, MySQL/PostgreSQL, AWS services (EC2, Lambda, RDS, S3, API Gateway), LangGraph, serverless architectures, and vector databases like FAISS, Pinecone, will be advantageous. Proficiency in version control using Git is also required. Ideally, you should have a B.Tech/M.Tech/MCA degree in Computer Science, Data Science, AI, or a related field with at least 1-5 years of relevant experience or a strong project/internship background in AI/ML. Strong communication skills, problem-solving abilities, self-motivation, and a willingness to learn emerging technologies are key qualities we are looking for in candidates. Working at Telepathy Infotech will provide you with the opportunity to contribute to impactful AI/ML and GenAI solutions while collaborating in a tech-driven and agile work environment. You will have the chance to grow your career in one of India's fastest-growing tech companies with a transparent and supportive company culture. To apply for this position, please send your CV to hr@telepathyinfotech.com or contact us at +91-8890559306 for any queries. Join us on our journey of innovation and growth in the field of AI and ML at Telepathy Infotech.,

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Senior QA Engineer at our company, you will be leading quality assurance for Generative AI (GenAI) solutions within our Digital Twin platform. Your role will involve focusing on the evaluation, reliability, and guardrails of AI-powered systems in production, going beyond traditional QA practices. Your responsibilities will include designing and implementing end-to-end QA strategies for applications using Node.js integrated with LLMs, RAG, and Agentic AI workflows. You will establish benchmarks and quality metrics for GenAI components, develop evaluation datasheets for LLM behavior validation, and conduct data quality testing for RAG databases. Additionally, you will perform A/B testing, define testing methodologies, collaborate with developers and AI engineers, build QA automation, and lead internal capability development by mentoring QA peers on GenAI testing practices. To be successful in this role, you should have at least 6 years of experience in software quality assurance, with a minimum of 3 years of experience in GenAI or LLM-based systems. You should possess a deep understanding of GenAI quality dimensions, experience in creating and maintaining LLM evaluation datasets, and familiarity with testing retrieval pipelines and RAG architectures. Preferred skills include experience with GenAI tools/platforms, exposure to evaluating LLMs in production settings, familiarity with prompt tuning and few-shot learning in LLMs, and basic scripting knowledge in Python, JavaScript, or TypeScript. If you are a passionate and forward-thinking QA Engineer with a structured QA discipline, hands-on experience in GenAI systems, and a strong sense of ownership, we encourage you to apply for this high-impact role within our innovative team.,

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

You are a talented and passionate RAG (Retrieval-Augmented Generation) Engineer with strong Python development skills, joining our AI/ML team in Bengaluru, India. Your role involves working on cutting-edge NLP solutions that integrate information retrieval techniques with large language models (LLMs). The ideal candidate will have experience with vector databases, LLM frameworks, and Python-based backend development. In this position, your responsibilities will include designing and implementing RAG pipelines that combine retrieval mechanisms with language models, developing efficient and scalable Python code for LLM-based applications, integrating with vector databases like Pinecone, FAISS, Weaviate, and more. You will fine-tune and evaluate the performance of LLMs using various prompt engineering and retrieval strategies, collaborating with ML engineers, data scientists, and product teams to deliver high-quality AI-powered features. Additionally, you will optimize system performance and ensure the reliability of RAG-based applications. To excel in this role, you must possess a strong proficiency in Python and experience in building backend services/APIs, along with a solid understanding of NLP concepts, information retrieval, and LLMs. Hands-on experience with at least one vector database, familiarity with Hugging Face Transformers, LangChain, LLM APIs, and experience in prompt engineering, document chunking, and embedding techniques are essential. Good knowledge of working with REST APIs, JSON, and data pipelines is required. Preferred qualifications for this position include a Bachelors or Masters degree in Computer Science, Data Science, or a related field, experience with cloud platforms like AWS, GCP, or Azure, exposure to tools like Docker, FastAPI, or Flask, and an understanding of data security and privacy in AI applications.,

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

You are looking for a visionary Data Science Manager with expertise in Generative AI and Retrieval-Augmented Generation (RAG) to lead AI initiatives from both technical and business perspectives. In this role, you will lead a team of data scientists and ML engineers, design Generative AI models, develop statistical models, and integrate knowledge retrieval systems to enhance performance. Your responsibilities will include mentoring the team, designing scalable AI/ML solutions, implementing Generative AI models, and developing statistical models for forecasting and segmentation. You will also be responsible for integrating databases and retrieval systems, ensuring operational excellence in MLOps, and collaborating with various teams to identify high-impact use cases for GenAI. To qualify for this role, you should have a Masters in Computer Science or related fields, 10+ years of data science experience with 2+ years in GenAI initiatives, proficiency in Python and key libraries, and a strong foundation in statistical analysis and predictive modeling. Experience in cloud platforms, vector databases, and MLOps is essential, along with a background in sectors like legal tech, fintech, retail, or health tech. If you have a proven track record in building and deploying LLMs, RAG systems, and search solutions, along with a knack for influencing product roadmaps and executive strategy, this role is perfect for you. Your ability to translate complex AI concepts into actionable strategies and present findings to non-technical audiences will be crucial in driving AI/ML adoption and contributing to the company's innovation roadmap.,

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Bengaluru, Karnataka, India

On-site

What will you do Voice AI Stack Ownership: Build and own the end-to-end voice bot pipeline ASR, NLU, dialog state management, tool calling, and TTS to create a natural, human-like conversation experience. LLM Orchestration & Tooling: Architect systems using MCP (Model Context Protocol) to mediate structured context between real-time ASR, memory, APIs, and the LLM. RAG Integration: Implement retrieval-augmented generation to ground responses using dealership knowledge bases, inventory data, recall lookups, and FAQs. Vector Store & Memory: Design scalable vector-based search for dynamic FAQ handling, call recall, and user-specific memory embedding. Latency Optimization: Engineer low-latency, streaming ASR + TTS pipelines and fine-tune turn-taking models for natural conversation. Model Tuning & Hallucination Control: Use fine-tuning, LoRA, or instruction tuning to customize tone, reduce hallucinations, and align responses to business goals. Instrumentation & QA Looping: Build robust observability, run real-time call QA pipelines, and analyze interruptions, hallucinations, and fallbacks. Cross-functional Collaboration: Work closely with product, infra, and leadership to scale this bot to thousands of US dealerships. What will make you successful in this role Architect-level thinking: You understand how ASR, LLMs, memory, and tools fit together and can design modular, observable, and resilient systems. LLM Tooling Mastery: You've implemented tool calling, retrieval pipelines, function calls, or prompt chaining across multiple workflows. Fluency in Vector Search & RAG: You know how to chunk, embed, index, and retrieve and how to avoid prompt bloat and token overflow. Latency-First Mindset: You debug token delays, know the cost of each API hop, and can optimize round-trip time to keep calls human-like. Grounding > Hallucination: You know how to trace hallucinations back to weak prompts, missing guardrails, or lack of tool access and fix them. Prototyper at heart: You're not scared of building from scratch and iterating fast, using open-source or hosted tools as needed. What you must have 5+ years in AI/ML or voice/NLP systems with real-time experience Deep knowledge of LLM orchestration, RAG, vector search, and prompt engineering Experience with MCP-style architectures or structured context pipelines between LLMs and APIs/tools Experience integrating ASR (Whisper/Deepgram), TTS (ElevenLabs/Coqui), and OpenAI/GPT-style models Solid understanding of latency optimization, streaming inference, and real-time audio pipelines Hands-on with Python, FastAPI, vector DBs (Pinecone, Weaviate, FAISS), and cloud infra (AWS/GCP) Strong debugging, logging, and QA instincts for hallucination, grounding, and UX behavior

Posted 2 months ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate who possesses hands-on experience with GPT-4, transformer models, and deep learning frameworks, along with a profound comprehension of model fine-tuning, deployment, and inference. Your key responsibilities will include designing, developing, and maintaining Python applications that are specifically tailored towards AI/ML and generative AI. You will also be involved in building and refining transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation purposes will be a crucial aspect of your role. Moreover, you will be tasked with implementing model inference pipelines and scalable APIs utilizing FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential in creating end-to-end AI solutions. Staying updated with the latest research and advancements in the realms of generative AI and ML is imperative for this position. From a technical standpoint, you should demonstrate a strong proficiency in Python and its relevant libraries like NumPy, Pandas, and Scikit-learn. With at least 7+ years of experience in AI/ML development, hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models, is required. Experience with frameworks like Hugging Face Transformers, OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable. Additionally, expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Having a knack for problem-solving and algorithmic thinking is crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Any contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered as added advantages. In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required. A Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit for this role.,

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Full-Stack AI App Developer at EMO Energy, you will play a key role in reimagining urban mobility, energy, and fleet operations through our AI-driven super app. You will have the opportunity to take full ownership of building and deploying a cutting-edge energy infrastructure startup in India. Your responsibilities will include architecting and developing a full-stack AI-enabled application, designing modular frontend views using React.js or React Native, creating intelligent agent interfaces, building secure backend APIs for managing energy and fleet operations, integrating real-time data workflows, implementing fleet tracking dashboards, and optimizing performance across various platforms. Collaboration with the founding team, ops team, and hardware teams will be essential to iterate fast and solve real-world logistics problems. The ideal candidate for this role should have a strong command of front-end frameworks such as React.js, experience with back-end technologies like FastAPI, Node.js, or Django, proficiency in TypeScript or Python, familiarity with GCP services, Docker, GitHub Actions, and experience with mobile integrations and AI APIs. End-to-end ownership of previous applications, strong UI/UX product sensibility, and experience in building dashboards or internal tools will be valuable assets. Additionally, the ability to adapt to ambiguity, communicate technical decisions to non-engineers, and a passion for clean code and impactful work are crucial for success in this role. If you are a highly motivated individual with a passion for AI-driven applications and a desire to lead the development of a cutting-edge fleet/energy platform, then this role at EMO Energy is the perfect opportunity for you. Join us in revolutionizing the future of urban mobility and energy infrastructure in India.,

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

You are seeking a hands-on backend expert to elevate your FastAPI-based platform to the next level by developing production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. In this role, you will be responsible for various key areas: 1. Core Backend Enhancements: - Building APIs - Strengthening security with OAuth2/JWT, rate-limiting, SecretManager, and enhancing observability through structured logging and tracing - Adding CI/CD, test automation, health checks, and SLO dashboards 2. Awesome UI Interfaces: - Developing UI interfaces using React.js/Next.js, Redact/Context, and various CSS frameworks like Tailwind, MUI, Custom-CSS, and Shadcn 3. LLM & Agentic Services: - Designing micro/mini-services to host and route to platforms such as OpenAI, Anthropic, local HF models, embeddings & RAG pipelines - Implementing autonomous/recursive agents that orchestrate multi-step chains for Tools, Memory, and Planning 4. Model-Inference Infrastructure: - Setting up GPU/CPU inference servers behind an API gateway - Optimizing throughput with techniques like batching, streaming, quantization, and caching using tools like Redis and pgvector 5. NLP & Data Services: - Managing the NLP stack with Transformers for classification, extraction, and embedding generation - Building data pipelines to combine aggregated business metrics with model telemetry for analytics You will be working with a tech stack that includes Python, FastAPI, Starlette, Pydantic, Async SQLAlchemy, Postgres, Docker, Kubernetes, AWS/GCP, Redis, RabbitMQ, Celery, Prometheus, Grafana, OpenTelemetry, and more. Experience in building production Python REST APIs, SQL schema design in Postgres, async patterns & concurrency, UI application development, RAG, LLM/embedding workflows, cloud container orchestration, and CI/CD pipelines is essential for this role. Additionally, experience with streaming protocols, NGINX Ingress, SaaS security hardening, data privacy, event-sourced data models, and other related technologies would be advantageous. This role offers the opportunity to work on evolving products, tackle real challenges, and lead the scaling of AI services while working closely with the founder to shape the future of the platform. If you are looking for meaningful ownership and the chance to solve forward-looking problems, this role could be the right fit for you.,

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

As a Lead Data Scientist at our company, you will play a crucial role in our AI/ML team by leveraging your deep expertise in Generative AI, large language models (LLMs), and end-to-end ML engineering. Your responsibilities will involve designing and developing intelligent systems using advanced NLP techniques and modern ML practices. You will be a key player in building and optimizing ML pipelines and AI systems across various domains, as well as designing and deploying RAG architectures and intelligent chatbots. Collaboration with cross-functional teams to integrate AI components into scalable applications will be essential, along with providing technical leadership, conducting code reviews, and mentoring junior team members. You will drive experimentation with prompt engineering, agentic workflows, and domain-driven designs, while ensuring best practices in testing, clean architecture, and model reproducibility. To excel in this role, you must possess expertise in AI/ML, including Machine Learning, NLP, Deep Learning, and Generative AI (GenAI). Proficiency in working with the LLM stack, such as GPT, Chatbots, Prompt Engineering, and RAG, is required. Strong programming skills in Python, familiarity with essential libraries like Pandas, NumPy, and Scikit-learn, and experience with architectures like Agentic AI, DDD, TDD, and Hexagonal Architecture are essential. You should be comfortable with tooling and deployment using Terraform, Docker, REST/gRPC APIs, and Git, and have experience working on cloud platforms like AWS, GCP, or Azure. Familiarity with AI coding tools like Copilot, Tabnine, and hands-on experience with distributed training in NVIDIA GPU-enabled environments are necessary. A proven track record of managing the full ML lifecycle from experimentation to deployment is crucial for success in this position. Additionally, experience with vector databases, knowledge of GenAI frameworks like LangChain and LlamaIndex, contributions to open-source GenAI/ML projects, and skills in performance tuning of LLMs and custom models are considered advantageous. If you are passionate about leveraging AI technologies to deliver real-world solutions, we are excited to discuss how you can contribute to our cutting-edge AI/ML team.,

Posted 2 months ago

Apply

3.0 - 4.0 years

3 - 4 Lacs

Gurgaon, Haryana, India

On-site

Job Responsibilities We are seeking a highly strategic and execution-driven person to join the CEO's office as a (Program and Strategy Manager) and drive the adoption of Agentic AI (autonomous, goal-driven AI systems) across all functionsProduct, Tech, Marketing, Data, Sales, Customer Success, Delivery, Onboarding, HR, Finance, PR, Branding and more. You will act as a bridge across cross-functional teams to ensure alignment and drive the strategic direction of our AI-powered product portfolio. What will you do Cross-Functional Agentic AI Transformation Define and execute the company-wide high-impact agentic AI automation across functions.(e.g., AI-driven sales bots, automated customer onboarding, HR talent matching, finance forecasting). Develop metrics and KPIs to track AI-driven efficiency gains. Lead no-code AI tooling initiatives (e.g., GPT-based automation, AI agents, RPA, AutoML) to empower non-technical teams. Partner with Engineering & Data teams to integrate AI into existing workflows. Program Management and Strategy Develop, implement, and monitor key strategic initiatives that align with the company's overall business objectives. Define, track, and own key business KPIs, ensuring execution of high-impact priorities. Design and lead cross-functional projects to drive business outcomes, such as revenue growth, customer acquisition, and operational efficiency Prepare executive reports, investor decks, and MBR presentations. Provide strategic assistance and support to the senior leadership team Team & Stakeholder Management Act as the bridge between the CEO's Office and department heads to drive AI adoption. Conduct workshops to upskill teams on AI tools and best practices. Manage vendor partnerships (OpenAI, Microsoft, Google AI, etc.) for AI tooling. What you must have 3+ years of experience, preferably in Product, Program Management, or Strategy roles. Expertise in analytics, excel, SQL, and BI tools (Tableau, Looker, Power BI, etc.) Basic familiarity with LLM APIs (e.g., OpenAI, Anthropic, Hugging Face) Technical background with ability to collaborate effectively with ML/AI engineering teams. Exceptional communication skills to explain technical AI concepts to non-technical stakeholders. Excellence in strategic thinking, problem-solving, and decision-making. Analytical mindset with the ability to define and measure success metrics. Ability to thrive in a fast-paced, ambiguous environment.

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Gurgaon, Haryana, India

On-site

Build and own the full voice bot pipeline including ASR, NLU, dialog management, tool calling, and TTS. Architect systems using MCP to connect ASR, memory, APIs, and LLMs in real-time. Implement RAG to ground responses using data from knowledge bases, inventory, and FAQs. Design scalable vector search systems for memory embedding and FAQ handling. Engineer low-latency ASR and TTS pipelines, optimizing for natural turn-taking. Apply fine-tuning, LoRA, and instruction tuning to reduce hallucinations and align model tone. Build observability systems and QA pipelines to monitor calls and analyze model behavior. Collaborate with cross-functional teams to scale the voice bot to thousands of users. Design modular, observable, and resilient AI systems. Implement retrieval pipelines, function calls, and prompt chaining across workflows. Expertly chunk, embed, and retrieve documents in RAG systems. Debug latency issues and optimize for low round-trip time. Trace hallucinations to root causes and fix via guardrails or tool access. Build prototypes using open-source or hosted tools with speed and flexibility. 5+ years in AI/ML or voice/NLP with real-time experience. Deep knowledge of LLM orchestration, vector search, and prompt engineering. Experience with ASR (Whisper, Deepgram), TTS (ElevenLabs, Coqui), and OpenAI models. Skilled in latency optimization and real-time audio pipelines. Hands-on with Python, FastAPI, vector DBs, and cloud platforms.

Posted 2 months ago

Apply

2.0 - 5.0 years

8 - 17 Lacs

Noida

Work from Office

Role and Responsibilities Develop and fine-tune LLMs using techniques like RAG, transfer learning, or domain-specific adaptation Build AI agents using frameworks like LangChain or CrewAI to manage dynamic and multi-step workflows Work with vector databases (e.g., Pinecone) to enable semantic search and retrieval Design and maintain ETL pipelines and ensure smooth data preprocessing and transformation Implement NLP solutions for tasks like intent detection, sentiment analysis, and content generation Develop backend APIs and services using Python frameworks like FastAPI or Flask Contribute to scalable microservice-based architectures Requirements Bachelor's degree in Computer Science, Information Technology, or a related field. 2 to 4 years in AI/ML development and backend system Machine Learning Fundamentals: Strong grasp of algorithms, model training, evaluation, and tuning Generative AI Models: Experience working with LLMs, RAG architecture, and fine-tuning techniques LangChain or Similar Frameworks: Hands-on experience building AI workflows using toolkits like LangChain Natural Language Processing (NLP): Proficiency in text analytics, classification, tokenization, embeddings Vector Databases: Practical use of tools like Pinecone, FAISS, or similar for retrieval-augmented generation Big Data Handling: Ability to work with large datasets, optimize storage, and processing pipelines SQL/NoSQL: Experience in querying and managing structured and unstructured data Python & API Development: Proficiency in Python and frameworks like FastAPI or Flask ETL & Data Preprocessing: Strong understanding of building pipelines for clean and efficient data processing Soft Skills: Strong problem-solving, communication, and collaboration abilities Good-to-Have Skills: Agentic AI Tools: Exposure to CrewAI or similar platforms for orchestrating multi-agent interactions Content Structuring: Experience in clustering, topic modeling, or organizing unstructured data ETL Enhancements: Advanced optimization techniques for faster and more efficient pipelines Domain Exposure: Prior work on projects involving customer insights, chat summarization, or sentiment analysis Role & responsibilities

Posted 2 months ago

Apply

5.0 - 8.0 years

8 - 18 Lacs

Noida, Greater Noida, Delhi / NCR

Work from Office

Role & responsibilities Develop and fine-tune LLMs using techniques like RAG, transfer learning, or domain-specific adaptation Build AI agents using frameworks like LangChain or CrewAI to manage dynamic and multi-step workflows Work with vector databases (e.g., Pinecone) to enable semantic search and retrieval Design and maintain ETL pipelines and ensure smooth data preprocessing and transformation Implement NLP solutions for tasks like intent detection, sentiment analysis, and content generation Develop backend APIs and services using Python frameworks like FastAPI or Flask Contribute to scalable microservice-based architectures Requirements Bachelor's degree in Computer Science, Information Technology, or a related field. 2 to 4 years in AI/ML development and backend system Machine Learning Fundamentals: Strong grasp of algorithms, model training, evaluation, and tuning Generative AI Models: Experience working with LLMs, RAG architecture, and fine-tuning techniques LangChain or Similar Frameworks: Hands-on experience building AI workflows using toolkits like LangChain Natural Language Processing (NLP): Proficiency in text analytics, classification, tokenization, embeddings Vector Databases: Practical use of tools like Pinecone, FAISS, or similar for retrieval-augmented generation Big Data Handling: Ability to work with large datasets, optimize storage, and processing pipelines SQL/NoSQL: Experience in querying and managing structured and unstructured data Python & API Development: Proficiency in Python and frameworks like FastAPI or Flask ETL & Data Preprocessing: Strong understanding of building pipelines for clean and efficient data processing Soft Skills: Strong problem-solving, communication, and collaboration abilities

Posted 2 months ago

Apply

8.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Senior Manager - Senior Data Scientist (NLP & Generative AI) Location: PAN India / Remote Employment Type: Full-time About the Role We are seeking a highly experienced Senior data scientist with 8+ years of expertise in machine learning, focusing on NLP, Generative AI, and advanced LLM ecosystems. This role demands leadership in designing and deploying scalable AI systems leveraging the latest advancements such as Google ADK, Agent Engine, and Gemini LLM. You will spearhead building real-time inference pipelines and agentic AI solutions that power complex, multi-user applications with cutting-edge technology. Key Responsibilities Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users. Design, implement, and manage agentic AI frameworks leveraging Google Adk, Langgraph or custom-built agents. Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications. Build robust MLOps pipelines for end-to-end lifecycle management of models-training, testing, deployment, and monitoring. Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms. Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality. Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity. Required Skills & Qualifications Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields. 7+ years in ML engineering, applied AI, or senior data scientist roles. Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, Hugging Face Transformers. Deep experience with NLP, Transformer models, and generative AI techniques. Practical knowledge of LLM inference scaling with tools like vLLM, Groq, Triton Inference Server, and Google ADK. Hands-on experience deploying AI models to concurrent users with high throughput and low latency. Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG). Experience with agentic AI using Adk, LangChain, Langgraph and Agent Engine Preferred Qualifications Experience with Google Gemini and other advanced LLM innovations. Contributions to open-source AI/ML projects or participation in applied AI research. Knowledge of hardware acceleration and GPU/TPU-based inference optimization. Exposure to event-driven architectures or streaming pipelines (Kafka, Redis).

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Lead Assistant Manager - Data Scientist (NLP & Generative AI) Location: PAN India / Remote Employment Type: Full-time About the Role We are looking for a motivated Data Scientist with 5+ years of experience in machine learning and data science, focusing on NLP and Generative AI. You will contribute to the design, development, and deployment of AI solutions centered on Large Language Models (LLMs) and agentic AI technologies, including Google ADK, Agent Engine, and Gemini. This role involves working closely with senior leadership to build scalable, real-time inference systems and intelligent applications. Key Responsibilities Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users. Design, implement, and manage agentic AI frameworks leveraging Google Adk, Langgraph or custom-built agents. Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications. Build robust MLOps pipelines for end-to-end lifecycle management of models-training, testing, deployment, and monitoring. Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms. Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality. Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity. Required Skills & Qualifications Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields. 5+ years in ML engineering, applied AI, or data scientist roles. Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, Hugging Face Transformers. Deep experience with NLP, Transformer models, and generative AI techniques. Hands-on experience deploying AI models to concurrent users with high throughput and low latency. Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG). Experience with agentic AI using Adk, LangChain, Langgraph and Agent Engine Preferred Qualifications Experience with Google Gemini and other advanced LLM innovations. Contributions to open-source AI/ML projects or participation in applied AI research. Knowledge of hardware acceleration and GPU/TPU-based inference optimization.

Posted 2 months ago

Apply

7.0 - 12.0 years

0 - 0 Lacs

Indore, Bengaluru

Work from Office

Required Skills & Experience: 4+ years of experience in penetration testing, red teaming or offensive security. 1+ years working with AI/ML or LLM-based systems. Deep familiarity with LLM architectures (e.g., GPT, Claude, Mistral, LLaMA) and pipelines (e.g., LangChain, Haystack, RAG-as-a-Service). Strong understanding of embedding models, vector databases (Pinecone, Weaviate, FAISS), and API-based model deployments. Experience with adversarial ML, secure inference, and data integrity in training pipelines. Experience with red team infrastructure and tooling such as Cobalt Strike, Mythic, Sliver, Covenant, and custom payload development. Proficient in scripting languages such as Python, PowerShell, Bash or Go.

Posted 2 months ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Principal Consultant - AI/ML Seeking an experienced GenAI/ML Engineer to integrate LLM APIs, build AI-driven applications, optimize model performance, and deploy AI services at scale. The ideal candidate has expertise in Python-based AI development, LLM orchestration, cloud deployment, and enterprise AI integration. Major focus should be at Gemni as CVS is GCP shop. Responsibilities . AI Application Development - Build and maintain Python-based AI services using LangChain, and CrewAI. Implement RAG-based retrieval and Agentic AI workflows. . LLM Integration & Optimization - Integrate Gemni, OpenAI, Azure OpenAI APIs. Optimize API calls using temperature, reduce hallucinations using embedding-based retrieval (FAISS, Pinecone). . Model Evaluation & Performance Tuning - Assess AI models using Model Scoring, fine-tune embeddings, and enhance similarity search for retrieval-augmented applications. . API & Microservices Development - Design scalable RESTful APIs services. Secure AI endpoints using OAuth2, JWT authentication, and API rate limiting. . Cloud Deployment & Orchestration - Deploy AI-powered applications using AWS Lambda, Kubernetes, Docker, CI/CD pipelines. Implement LangChain for AI workflow automation. . Agile Development & Innovation - Work in Scrum teams, estimate tasks accurately, and contribute to incremental AI feature releases. Qualifications we seek in you! Minimum Qualifications . BE /B.Tech/M.Tech/MCA . Experience in AI/ML: PyTorch, TensorFlow, Hugging Face, Pinecone . Experience in LLMs & APIs: OpenAI, LangChain, CrewAI . Experience in Cloud & DevOps: AWS, Azure, Kubernetes, Docker, CI/CD . Experience in Security & Compliance: OAuth2, JWT, HIPAA Preferred qualifications . Experience in AI/ML, LLM integrations, and enterprise cloud solutions . Proven expertise in GenAI API orchestration, prompt engineering, and embedding retrieval . Strong knowledge of scalable AI architectures and security best practices Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a %27starter kit,%27 paying to apply, or purchasing equipment or training.

Posted 2 months ago

Apply

10.0 - 17.0 years

35 - 65 Lacs

Bengaluru

Hybrid

Role Overview: As Principal Data Engineer, you will drive the architecture and technical direction for MontyClouds next-generation data and knowledge platforms, enabling intelligent automation, advanced analytics, and AI-driven products for a wide range of users. You will play a pivotal role in shaping the data foundation for AI-driven systems, ensuring our platform is robust, scalable, and ready to support state-of-the-art AI workflows. You will also lead the efforts in maintaining stringent data security standards, safeguarding sensitive information throughout data pipelines and platforms. Key Responsibilities: Architect and optimize scalable data platforms that support advanced analytics, AI/ML capabilities, and unified knowledge access. Lead the design and implementation of high-throughput data pipelines and data lakes for both batch and real-time workloads. Set technical standards for data modeling, data quality, metadata management, and lineage tracking, with a strong focus on AI-readiness. Design and implement secure, extensible data connectors and frameworks for integrating customer-provided data streams. Build robust systems for processing and contextualizing data, including reconstructing event timelines and enabling higher-order intelligence. Partner with data scientists, ML engineers, and cross-functional stakeholders to operationalize data for machine learning and AI-driven insights. Evaluate and adopt best-in-class tools from the modern AI data stack (e.g., feature stores, orchestration frameworks, vector databases, ML pipelines). Drive innovation and continuous improvement in data engineering practices, data governance, and automation. Provide mentorship and technical leadership to the broader engineering team. Champion security, compliance, and privacy best practices in multi-tenant, cloud-native environments. Desired Skills Must Have Deep expertise in cloud-native data engineering (AWS preferred), including large-scale data lakes, warehouses, and event-driven/data streaming architectures. Hands-on experience building and maintaining data pipelines with modern frameworks (e.g., Spark, Kafka, Airflow, dbt). Strong track record of enabling AI/ML workflows, including data preparation, feature engineering, and ML pipeline operationalization. Familiarity with modern AI/ML data stack components such as feature stores (e.g., Feast), vector databases (e.g., Pinecone, Weaviate), orchestration tools (e.g., Airflow, Prefect), and ML ops tools (e.g., MLflow, Tecton). Experience working with modern open table formats such as Apache Iceberg, Delta Lake, or Hudi for scalable data lake and lakehouse architectures. Experience implementing data privacy frameworks such as GDPR and supporting data anonymization for diverse use cases. Strong understanding of data privacy, RBAC, encryption, and compliance in multi-tenant platforms. Good to Have Experience with metadata management, semantic layers, or knowledge of graph architectures. Exposure to SaaS and multi-cloud environments serving both internal and external consumers. Background in supporting AI Agents or AI-driven automation in production environments. Experience processing high-volume cloud infrastructure telemetry, including AWS CloudTrail, CloudWatch logs, and other event-driven data sources, to support real-time monitoring, anomaly detection, and operational analytics. Experience 10+ years of experience in data engineering, distributed systems, or related fields. Education Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (preferred).

Posted 2 months ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies