8.0 - 12.0 years
0 Lacs
Delhi
On-site
As a Senior GenAI Engineer at NTT DATA in Delhi, Haryana (IN-HR), India, you will design, build, and productionize GenAI and agentic systems on hyperscalers such as Azure, AWS, and GCP. You will lead the development lifecycle from problem framing to MLOps deployment, while mentoring junior engineers and collaborating with product, design, and client teams.

Key Responsibilities:
- Design and build GenAI/agentic systems, including chat copilots, workflow/graph agents, and tool use
- Implement chunking, hybrid search, vector stores, re-ranking, feedback loops, and continuous data quality/evaluation (a hybrid-search sketch follows this posting)
- Select, integrate, and fine-tune LLMs and multimodal models
- Apply prompt-engineering techniques to specific use cases
- Deliver solutions based on LLMs, NLP, deep learning, classical ML, object detection/classification, and related techniques
- Demonstrate a clear understanding of CI/CD, guardrail configuration, and PII redaction
- Collaborate with clients and stakeholders across multiple geographies
- Stay informed about the latest advancements in GenAI, machine learning, and AI technologies

Qualifications:
- Bachelor's or Master's degree, or equivalent
- 8+ years in software/ML engineering, with 1.5+ years of hands-on experience with LLMs/GenAI and agentic frameworks
- Proven track record of shipping production AI systems on at least one hyperscaler
- Experience leading teams and owning end-to-end delivery

Required Skills:
- Strong Python experience for building AI/ML and GenAI solutions
- Experience with leading frameworks such as LangGraph, LangChain, Semantic Kernel, CrewAI, and AutoGen
- Strong experience with vector databases such as Pinecone, Milvus, and Redis/pgvector
- Working experience with hybrid search, re-rankers, evaluation, and observability
- Proficiency in SQL, NLP, computer vision, and deep learning algorithms
- Experience with open-source models
- Knowledge of UI/UX is an added advantage

About NTT DATA:
NTT DATA is a $30 billion global innovator of business and technology services, serving 75% of the Fortune Global 100. With diverse experts in more than 50 countries, NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, it offers business and technology consulting, data and artificial intelligence, industry solutions, and the development and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and is part of the NTT Group, which invests significantly in R&D to help organizations and society move confidently into the digital future. Visit us at us.nttdata.com.
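The hybrid search and re-ranking named above can be made concrete with a small sketch. This is not NTT DATA's implementation: it is a minimal example of reciprocal rank fusion (RRF), one common way to merge vector-store and keyword rankings before a re-ranker rescores the fused list; `vector_search` and `keyword_search` are hypothetical stand-ins for a vector store and a BM25/full-text index.

```python
# Minimal hybrid-search sketch using reciprocal rank fusion (RRF).
from collections import defaultdict
from typing import Callable

def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists of document IDs; k=60 is the commonly used constant."""
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_search(query: str,
                  vector_search: Callable[[str], list[str]],
                  keyword_search: Callable[[str], list[str]],
                  top_n: int = 5) -> list[str]:
    # Each retriever returns document IDs in its own ranked order.
    fused = rrf_merge([vector_search(query), keyword_search(query)])
    return fused[:top_n]  # a cross-encoder re-ranker would rescore these

print(hybrid_search("genai engineer",
                    lambda q: ["d1", "d3", "d2"],
                    lambda q: ["d2", "d1", "d4"]))
```

RRF is convenient here because it uses only rank positions, so scores from incompatible retrievers never need to be calibrated against each other.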
Posted 14 hours ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Role Overview:
As a Senior Consultant in TMT Business Consulting Finance at EY, you will work with Technology, Media & Entertainment, and Telecommunications organizations to help them evolve, transform, and stay competitive in the industry. Your role will involve creating compelling employee and customer experiences, ensuring operational excellence, safeguarding data and reputation, and supporting M&A strategies. You will play a key part in shaping the technology revolution and contributing to building a better working world for all.

Key Responsibilities:
- Develop end-to-end GenAI solutions using LLMs (OpenAI, Claude, Llama, Gemini, etc.)
- Build and manage multi-agent orchestration using frameworks like AutoGen, CrewAI, and Semantic Kernel
- Design document chunking, embedding, and indexing pipelines using Pinecone, Weaviate, FAISS, or pgvector
- Optimize RAG pipelines for latency, relevance, and safety
- Create reusable prompt templates for zero-shot, few-shot, and chain-of-thought applications (see the sketch after this posting)
- Fine-tune open-source models using LoRA, QLoRA, or supervised datasets when needed
- Containerize AI workflows using Docker/Kubernetes and deploy on Azure/AWS/GCP
- Set up CI/CD pipelines, logging, monitoring, and automated evaluation metrics (hallucination, toxicity)
- Maintain clean code, modular utilities, internal documentation, and reusable accelerator components for the GenAI Factory

Qualifications Required:
- 3-5 years of practical experience building GenAI or agentic AI applications with measurable business outcomes
- Strong Python skills (asyncio, FastAPI, LangChain, Pydantic) and basic JavaScript/TypeScript for integration plugins
- Experience with LangChain, LlamaIndex, AutoGen, CrewAI, Semantic Kernel, and the Hugging Face ecosystem
- Expertise in machine learning algorithms and deep learning techniques (e.g., BERT, GPT, LSTM, CNN, RNN)
- Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Engineering, or a related field

Additional Company Details:
At EY, you will work with over 200,000 clients globally as part of a team of 300,000 professionals, including 33,000 in India. EY invests in the skills and learning of its employees, offering personalized Career Journeys and access to career frameworks to enhance roles, skills, and opportunities. The organization strives to be an inclusive employer, balancing excellent client service with the career growth and well-being of its people.
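A rough illustration of the reusable prompt templates this posting calls for: a minimal few-shot, chain-of-thought template in plain Python. The template text and worked example are placeholders, not EY material.

```python
# Hedged sketch of a reusable few-shot chain-of-thought prompt template.
FEW_SHOT_COT = """You are a careful analyst. Think step by step.

{examples}

Question: {question}
Reasoning:"""

def build_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Render worked (question, reasoning) pairs into the template."""
    rendered = "\n\n".join(
        f"Question: {q}\nReasoning: {r}" for q, r in examples
    )
    return FEW_SHOT_COT.format(examples=rendered, question=question)

demo = [("What is 17 * 3?", "17 * 3 = 51. Answer: 51.")]
print(build_prompt("What is 12 * 4?", demo))
```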
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Kochi, Kerala
On-site
As a Data Engineer, your main objective will be to build data pipelines for crawling, parsing, and connecting external systems and interfaces. This includes developing crawling and fetching pipelines using an API-first approach and tools such as Playwright and Requests. You will also parse and normalize job postings and CVs, implement deduplication and delta logic, and work on embeddings and similarity search. Additionally, you will integrate with systems such as HR4YOU, SerpAPI, the BA job board, and email/SMTP.

Your role will also cover batch and stream processing using Azure Functions or container jobs, implementing retry/backoff strategies, and setting up dead-letter queues for error handling (sketched below). Monitoring data quality metrics such as freshness, duplicate rate, coverage, and cost per 1,000 items will be crucial. You will collaborate with the frontend team on data exports and admin configuration, ensuring seamless data flow across the system.

The ideal candidate has at least 4 years of experience in backend/data engineering. Proficiency in Python, especially with FastAPI, pydantic, httpx/requests, and Playwright/Selenium, as well as solid experience in TypeScript for smaller services and SDKs, is required. Familiarity with Azure services like Functions/Container Apps, Storage/Blob, Key Vault, and Monitor/Log Analytics is essential. Experience with messaging systems like Service Bus/Queues, databases such as PostgreSQL and pgvector, and clean ETL/ELT patterns is highly desirable. Knowledge of testability using pytest, observability with OpenTelemetry, and NLP/IE experience with tools like spaCy, regex, and rapidfuzz will be advantageous.

Moreover, experience with license/ToS-compliant data retrieval and captcha/anti-bot strategies, plus an API-first, clean-code, trunk-based working method, will be beneficial. Familiarity with tools like GitHub, Docker, GitHub Actions/Azure DevOps, pnpm/Turborepo, Jira/Linear, and Notion/Confluence is a plus. This role may involve rotating on-call support responsibilities, following the "you build it, you run it" approach to ensure operational efficiency and accountability.
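The retry/backoff and dead-letter pattern referenced above can be sketched as follows, assuming httpx and an in-process list standing in for a real dead-letter queue (e.g. an Azure Storage queue); the URL is a placeholder.

```python
# Hedged sketch: async fetch with exponential backoff and a dead-letter list.
import asyncio
import httpx

dead_letter: list[dict] = []  # failed items land here for later inspection

async def fetch_with_backoff(client: httpx.AsyncClient, url: str,
                             max_retries: int = 4) -> httpx.Response | None:
    for attempt in range(max_retries):
        try:
            resp = await client.get(url, timeout=10.0)
            resp.raise_for_status()
            return resp
        except httpx.HTTPError:
            await asyncio.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s
    dead_letter.append({"url": url, "retries": max_retries})
    return None

async def main() -> None:
    async with httpx.AsyncClient() as client:
        await fetch_with_backoff(client, "https://example.com/jobs.json")

asyncio.run(main())
```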
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Requirements:
- 4-5 years of experience as an ML Engineer or Applied Scientist in production
- Strong Python skills (FastAPI/Flask), with PyTorch or TensorFlow
- Proficient in building data pipelines and deploying ML systems
- Experience with LLMs, embeddings, RAG systems, and vector DBs (PGVector/Postgres preferred; a lookup sketch follows below)
- Hands-on with multiple LLM providers (OpenAI, Anthropic, Google, open source)
- Knowledge of containerization and deployment (Docker, Kubernetes, AWS/GCP, CI/CD)
- Bonus: familiarity with LangGraph/LangChain, vLLM, Ray, LLM eval tools, or event-driven systems

Responsibilities:
- Build and scale ML pipelines for Nova's AI teammates
- Implement agentic AI workflows with LLMs and orchestration frameworks
- Design and optimize RAG pipelines for performance and accuracy
- Build connectors to integrate AI with external systems
- Fine-tune models for domains like finance, legal, and tax
- Monitor model performance; optimize for cost, latency, and reliability
- Lead best practices for scalable, production-grade ML

Job Details:
Location: Mulund, Mumbai

Interview process:
- Round 1: Technical screening
- Round 2: ML system design
- Round 3: Coding and hands-on evaluation
- Round 4: Cultural fit and founder discussion
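Since the posting prefers PGVector on Postgres, here is a hedged sketch of a similarity lookup with the psycopg 3 driver; the DSN, the table schema, and the choice of pgvector's cosine-distance operator `<=>` are assumptions to adapt.

```python
# Hedged sketch of a pgvector similarity lookup with the psycopg 3 driver.
# Assumed table: documents(id bigint, content text, embedding vector(1536)).
import psycopg

def top_k_similar(query_embedding: list[float], k: int = 5) -> list[tuple]:
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with psycopg.connect("postgresql://localhost/ragdb") as conn:  # placeholder DSN
        return conn.execute(
            "SELECT id, content, embedding <=> %s::vector AS distance "
            "FROM documents ORDER BY distance LIMIT %s",
            (vec, k),
        ).fetchall()
```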
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
Data Science is about pioneering new approaches to help businesses answer critical questions by analyzing massive data sets. As a Senior Advisor on the Data Science Team in Bangalore, you will play a crucial role in shaping methodologies and models to extract meaningful insights from petabyte-scale data. Collaborating with experts, engineers, and academics, you will empower customers to derive valuable insights from large data sets.

Your primary responsibilities will include contributing to business strategies, influencing decision-making through deep analysis, and providing actionable recommendations based on complex data interpretations. You will design processes to extract insights from unstructured data, develop predictive models and algorithms, and work closely with stakeholders to deliver actionable business insights.

Key Responsibilities:
- Collaborate with internal and external teams to understand customer needs and propose solutions
- Engage with external clients to gather project requirements and share analytical insights
- Conduct data exploration and preparation for model development and validation
- Apply statistical, machine learning, and business intelligence techniques to deliver actionable insights
- Solution, build, deploy, and monitor models effectively

Requirements:
- Minimum 8 years of experience in NLP, machine learning, computer vision, and GenAI
- Proficiency in data visualization tools like Power BI, matplotlib, and plotly
- Hands-on experience with CNN, LSTM, YOLO, SQL, PostgreSQL, PGVector, and ChromaDB
- Strong understanding of MLOps and ML lifecycle management
- Expertise in large language models, prompt engineering, and NLP tasks

Desired Skills:
- Knowledge of streaming/messaging frameworks like Kafka, RabbitMQ, and ZeroMQ
- Familiarity with cloud platforms such as AWS, Azure, and GCP
- Experience with web technologies and frameworks like HTTP, REST, GraphQL, Flask, and Django
- Proficiency in programming languages like Java or JavaScript

Join Dell Technologies to work with cutting-edge technology and drive impactful change. We value diversity and are committed to providing an inclusive work environment where everyone can thrive and grow. If you are passionate about leveraging data science to make a difference, apply now and take the first step towards a rewarding career with us.

Application closing date: 8 August 2025
Job ID: R260848
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
NTT DATA is looking for a Python Full Stack + Java Microservices developer to join their team in Chennai, Tamil Nadu, India. In this role, you will be responsible for the development, testing, and maintenance of software applications and systems. You will also lead the planning and design of product and technical initiatives and mentor team members.

Your duties will include driving improvements in engineering techniques, standards, practices, and processes across the department, fostering a culture of knowledge sharing and collaboration. You will collaborate with team members to ensure deliverables are of high quality, optimized, and adherent to performance standards. Additionally, you will engage with internal stakeholders to understand user requirements and prepare design documents to be shared with the development team.

Participating in Agile planning and estimation activities, breaking down large tasks into smaller ones, resolving team queries, and escalating issues to team leads when necessary are also part of your responsibilities. You will provide technical guidance to the team, implement reusable frameworks, manage the environment, and design layouts. Mentoring junior team members and supporting interviews and evaluations will also be expected of you.

NTT DATA is a global innovator of business and technology services, serving 75% of the Fortune Global 100. Committed to helping clients innovate, optimize, and transform for long-term success, the company has experts in over 50 countries and a robust partner ecosystem. Its services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure, investing heavily in research and development to help organizations and society move confidently into the digital future.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Thane, Maharashtra
On-site
As a Senior Backend Engineer / Technical Architect on our B2C Platform team in Thane, Maharashtra, you will play a crucial role in designing and developing scalable backend services. With over 5 years of experience in backend development, you will leverage your expertise in Java (Spring Boot) and Python (FastAPI/Django) to make architectural decisions related to infrastructure, real-time data pipelines, and platform observability.

Your responsibilities will include integrating Large Language Models (LLMs) and AI-driven workflows into backend systems, collaborating with cross-functional teams, leading system design reviews, and mentoring junior engineers. You will bring deep hands-on experience in building microservices, event-driven architectures, and streaming pipelines, along with proficiency in databases and caching tools such as Redis, PostgreSQL, and PGVector.

Preferred skills include working with real-time infrastructure, exposure to AI-driven product features, and a track record of building high-performance consumer platforms from scratch. Additionally, familiarity with frontend tools like ReactJS, DevOps practices (e.g., Docker/Kubernetes, Prometheus, Grafana), and cloud platforms like AWS/GCP is desired.

In this role, you will contribute to building a robust and scalable backend for our next-gen B2C platform, implementing intelligent, real-time features that personalize the user experience, and establishing a strong technical foundation to support long-term innovation and growth. If you are passionate about shaping the future of technology and solving meaningful problems at scale, we would love to hear from you.
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India.

We are seeking a Senior AI Engineer / Data Engineer to join our engineering team and help build the future of AI-powered business solutions. In this role, you'll develop intelligent systems that leverage advanced large language models (LLMs), real-time AI interactions, and cutting-edge retrieval architectures. Your work will directly contribute to products that are reshaping how businesses operate, particularly in recruitment, data extraction, and intelligent decision-making. This is an exciting opportunity for someone who thrives on building production-grade AI systems and working across the full stack of modern AI technologies.

Responsibilities:
- Design, build, and optimize AI-powered systems using multi-modal architectures (text, voice, visual)
- Integrate and deploy LLM APIs from providers such as OpenAI, Anthropic, and AWS Bedrock
- Build and maintain RAG (Retrieval-Augmented Generation) systems with hybrid search, re-ranking, and knowledge graphs
- Develop real-time AI features using streaming analytics and voice interaction tools (e.g., ElevenLabs)
- Build APIs and pipelines using FastAPI or similar frameworks to support AI workflows (sketched below)
- Process and analyze unstructured documents with layout and semantic understanding
- Implement predictive models that power intelligent business recommendations
- Deploy and maintain scalable solutions using AWS services (EC2, S3, RDS, Lambda, Bedrock, etc.)
- Use Docker for containerization and manage CI/CD workflows and version control via Git
- Debug, monitor, and optimize performance for large-scale data pipelines
- Collaborate cross-functionally with product, data, and engineering teams

Qualifications:
- 5+ years of experience in AI/ML or data engineering with Python in production environments
- Hands-on experience with LLM APIs and frameworks such as OpenAI, Anthropic, Bedrock, or LangChain
- Production experience using vector databases like PGVector, Weaviate, FAISS, or Pinecone
- Strong understanding of NLP, document extraction, and text processing
- Proficiency in AWS cloud services, including Bedrock, EC2, S3, Lambda, and monitoring tools
- Experience with FastAPI or similar frameworks for building AI/ML APIs
- Familiarity with embedding models, prompt engineering, and RAG systems
- Asynchronous programming knowledge for high-throughput pipelines
- Experience with Docker, Git workflows, CI/CD pipelines, and testing best practices

Preferred:
- Background in HRTech or ATS integrations (e.g., Greenhouse, Workday, Bullhorn)
- Experience working with knowledge graphs (e.g., Neo4j) for semantic relationships
- Real-time AI systems (e.g., WebRTC, OpenAI Realtime API) and voice AI tools (e.g., ElevenLabs)
- Advanced Python development skills using design patterns and clean architecture
- Large-scale data processing experience (1-2M+ records) with cost optimization techniques for LLMs
- Event-driven architecture experience using AWS SQS, SNS, or EventBridge
- Hands-on experience with fine-tuning, evaluating, and deploying foundation models
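A minimal sketch of the FastAPI service layer described above, with a placeholder `call_llm` function standing in for a provider SDK such as OpenAI, Anthropic, or Bedrock; nothing here reflects STAND 8's actual codebase.

```python
# Minimal sketch of a FastAPI layer serving an AI workflow.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Ask(BaseModel):
    question: str

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real OpenAI/Anthropic/Bedrock client call here.
    return f"(model answer to: {prompt})"

@app.post("/ask")
async def ask(body: Ask) -> dict:
    # A RAG step would run retrieval here and prepend context to the prompt.
    return {"answer": call_llm(body.question)}

# Run with: uvicorn app:app --reload   (assuming this file is app.py)
```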
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
Data Science is all about breaking new ground to enable businesses to answer their most urgent questions. Pioneering massively parallel, data-intensive analytic processing, our mission is to develop a whole new approach to generating meaning and value from petabyte-scale data sets and to shape brand-new methodologies, tools, statistical methods, and models. In collaboration with leading academics, industry experts, and highly skilled engineers, the goal is to equip customers to generate sophisticated new insights from the biggest of big data. Join the team to do the best work of your career and make a profound social impact as an Advisor on the Data Science Team in Bangalore.

As a Data Science Advisor, you will contribute to the business strategy and influence decision-making based on insights gained from deep-dive analysis. You will produce actionable and compelling recommendations by interpreting insights from complex data sets, and design processes to consolidate and examine unstructured data to generate actionable insights. Additionally, you will partner with business leaders, engineers, and industry experts to construct predictive models, algorithms, and probability engines.

You will:
- Partner with internal and external teams to understand customer requirements and develop proposals
- Interact with external customers to gather project requirements, provide status updates, and share analytical insights
- Implement preliminary data exploration and data preparation steps for model development/validation
- Apply a broad range of techniques and theories from statistics, machine learning, and business intelligence to deliver actionable business insights
- Solution, build, deploy, and set up monitoring for models

Qualifications:
- 6+ years of related experience with proficiency in NLP, machine learning, computer vision, and GenAI
- Working experience in data visualization (e.g., Power BI, matplotlib, plotly)
- Hands-on experience with CNN, LSTM, and YOLO, plus database skills including SQL, PostgreSQL, PGVector, and ChromaDB
- Proven experience in MLOps and LLMOps, with a strong understanding of ML lifecycle management
- Expertise with large language models (LLMs), prompt engineering, fine-tuning, and integrating LLMs into applications for natural language processing (NLP) tasks

Desirable Skills:
- Strong product/technology/industry knowledge and familiarity with streaming/messaging frameworks (e.g., Kafka, RabbitMQ, ZeroMQ)
- Experience with cloud platforms (e.g., AWS, Azure, GCP)
- Experience with web technologies and frameworks (e.g., HTTP/REST/GraphQL, Flask, Django)
- Skilled in programming languages like Java or JavaScript

Dell Technologies is committed to providing equal employment opportunities for all employees and creating a work environment free of discrimination and harassment. If you are looking for an opportunity to grow your career with advanced technology and some of the best minds in the industry, this role might be the perfect fit for you. Join Dell Technologies to build a future that works for everyone, because Progress Takes All of Us.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role:
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities:

Infrastructure Setup & Management:
- Design and implement cloud-native architectures on AWS, and manage on-premise open source environments when required
- Automate infrastructure provisioning using tools like Terraform or CloudFormation
- Maintain scalable environments for dev, staging, and production

CI/CD & Release Management:
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads
- Enable automated testing, security scanning, and artifact deployments
- Manage configuration and secret management across environments

Containerization & Orchestration:
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s)
- Implement service mesh, auto-scaling, and rolling updates

Monitoring, Security, and Reliability:
- Implement observability (logging, metrics, tracing) using open source or cloud tools (a metrics sketch follows below)
- Ensure security best practices across infrastructure, pipelines, and deployed services
- Troubleshoot incidents, manage disaster recovery, and support high availability

Model DevOps / MLOps:
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps)
- Support data pipelines, vector databases, and model hosting for AI applications

Required Skills and Qualifications:

Cloud & Infra:
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open source tools

DevOps & Automation:
- Hands-on experience with Terraform/CloudFormation
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD

Containerization & Orchestration:
- Expertise with Docker and Kubernetes (EKS or self-hosted)
- Familiarity with Helm charts and service meshes (Istio/Linkerd)

Monitoring / Observability Tools:
- Experience with Prometheus, Grafana, the ELK/EFK stack, and CloudWatch
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry

Security & Compliance:
- Understanding of cloud security best practices
- Familiarity with tools like Vault and AWS Secrets Manager

Model DevOps / MLOps Tools (Preferred):
- Experience with MLflow, Kubeflow, BentoML, or Weights & Biases (W&B)
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation

Preferred Qualifications:
- Knowledge of cost optimization for cloud and hybrid infrastructures
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows
- Familiarity with serverless and event-driven architectures

Education:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer:
- Opportunity to work on modern cloud-native systems and AI-powered platforms
- Exposure to hybrid environments (AWS and open source on-prem)
- Competitive salary, benefits, and a growth-oriented culture
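For the observability responsibilities, a minimal Python sketch using the official prometheus_client library shows the custom-metrics side; the port and metric names are illustrative, not this team's conventions.

```python
# Hedged sketch: exposing custom metrics for a Prometheus/Grafana stack.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("pipeline_requests_total", "Pipeline runs processed")
LATENCY = Histogram("pipeline_latency_seconds", "Pipeline run latency")

def run_pipeline_step() -> None:
    with LATENCY.time():           # records duration into the histogram
        time.sleep(random.random() * 0.1)
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)        # metrics served at :8000/metrics
    while True:
        run_pipeline_step()
```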
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a DataOps Engineer, you will play a crucial role within our data engineering team, blending elements of software engineering, DevOps, and data analytics. Your primary responsibility will be the development and maintenance of secure, scalable, and high-quality data pipelines and infrastructure to support our clients' advanced analytics, machine learning, and real-time decision-making needs.

Your key responsibilities will include designing, developing, and managing robust ETL/ELT pipelines utilizing Python and modern DataOps methodologies, and implementing data quality checks, pipeline monitoring, and error handling mechanisms (a quality-check sketch follows below). Building data solutions on AWS cloud services like S3, ECS, Lambda, and CloudWatch will be an integral part of your role, as will containerizing applications with Docker and orchestrating them using Kubernetes for scalable deployments. You will work with infrastructure-as-code tools and CI/CD pipelines to automate deployments efficiently. Designing and optimizing data models using PostgreSQL, Redis, and PGVector for high-performance storage and retrieval will be essential, along with supporting feature stores and vector-based storage for AI/ML applications.

You will drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. Additionally, reviewing pull requests (PRs), conducting code reviews, and enforcing security and performance standards will be part of your routine. Collaborating closely with product owners, analysts, and architects to refine user stories and technical requirements will also be crucial to the success of the projects.

As for the required skills and qualifications, we are looking for someone with at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles focusing on data products. Proficiency in Python, Docker, Kubernetes, and AWS (especially S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis, plus experience with PGVector, would be a strong advantage. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is also required, along with experience working in Agile/Scrum environments and excellent collaboration and communication skills. A passion for developing clean, well-documented, and scalable code in a collaborative environment is highly valued, as is familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines.
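The data quality checks mentioned above might look like the following minimal gate, run before a load step; the thresholds and row shape are illustrative assumptions, not this team's actual standards.

```python
# Hedged sketch of a pipeline data-quality gate: duplicate rate and freshness.
from datetime import datetime, timedelta, timezone

def quality_gate(rows: list[dict], max_dup_rate: float = 0.05,
                 max_age_hours: int = 24) -> None:
    ids = [r["id"] for r in rows]
    dup_rate = 1 - len(set(ids)) / len(ids)
    newest = max(r["fetched_at"] for r in rows)
    age = datetime.now(timezone.utc) - newest
    if dup_rate > max_dup_rate:
        raise ValueError(f"duplicate rate {dup_rate:.1%} exceeds threshold")
    if age > timedelta(hours=max_age_hours):
        raise ValueError(f"data is stale: newest record is {age} old")

rows = [{"id": 1, "fetched_at": datetime.now(timezone.utc)},
        {"id": 2, "fetched_at": datetime.now(timezone.utc)}]
quality_gate(rows)  # passes silently when both checks hold
```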
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a DataOps Engineer, you will play a crucial role within our data engineering team, operating in the realm that merges software engineering, DevOps, and data analytics. Your primary responsibility will involve creating and managing secure, scalable, and production-ready data pipelines and infrastructure that are vital in supporting advanced analytics, machine learning, and real-time decision-making capabilities for our clientele.

Your key duties will encompass designing, developing, and overseeing the implementation of robust, scalable, and efficient ETL/ELT pipelines leveraging Python and contemporary DataOps methodologies. You will also be tasked with incorporating data quality checks, pipeline monitoring, and error handling mechanisms, as well as constructing data solutions utilizing cloud-native services on AWS like S3, ECS, Lambda, and CloudWatch.

Furthermore, your role will entail containerizing applications using Docker and orchestrating them via Kubernetes to facilitate scalable deployments. You will collaborate with infrastructure-as-code tools and CI/CD pipelines to automate deployments effectively. You will also be involved in designing and optimizing data models using PostgreSQL, Redis, and PGVector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications.

In addition to your technical responsibilities, you will be actively engaged in driving Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will be responsible for reviewing pull requests (PRs), conducting code reviews, and upholding security and performance standards. Your collaboration with product owners, analysts, and architects will be essential in refining user stories and technical requirements.

To excel in this role, you must have at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products. Proficiency in Python, Docker, Kubernetes, and AWS (specifically S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases like PostgreSQL and Redis, plus experience with PGVector, will be advantageous. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is crucial, as is experience working in Agile/Scrum environments with excellent collaboration and communication skills, a passion for developing clean, well-documented, and scalable code, and familiarity with DataOps principles encompassing automation, testing, monitoring, and deployment of data pipelines.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
NTT DATA is looking for a talented and passionate individual to join our team in Chennai, Tamil Nadu, India as a Python Full Stack + Java Microservices Developer. In this role, you will be responsible for the development, testing, and maintenance of software applications and systems, and will lead the planning and design of product and technical initiatives while mentoring developers and team members.

Your responsibilities will include driving improvements in engineering techniques, standards, practices, and processes across the department, fostering a culture of knowledge sharing and collaboration. You will collaborate with team members to ensure high-quality deliverables that are optimized and adhere to performance standards. Additionally, you will engage with key internal stakeholders to understand user requirements and prepare low-level design documents for the development team.

Participating in Agile planning and estimation activities will be a crucial part of your responsibilities, where you will break down large tasks into smaller ones. You will also resolve team queries and escalate issues to the team lead if clarification is required from the customer. Providing technical guidance to the team, leading and resolving issues, and implementing reusable frameworks will be key aspects of your role, as will mentoring junior team members and supporting interviews and evaluations.

If you are looking to be part of an inclusive, adaptable, and forward-thinking organization, NTT DATA is the place for you. Join us in our mission to help clients innovate, optimize, and transform for long-term success, and be a part of our diverse team of experts in over 50 countries. Visit us at nttdata.com to learn more about our services and commitment to moving confidently into the digital future.
Posted 2 months ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
We are looking for a skilled, motivated, and quick-learning Full Stack Developer to join our team working on cutting-edge GenAI development. As a Full Stack Developer, you will create innovative applications and solutions spanning both frontend and backend technologies. While our solutions often involve Retrieval-Augmented Generation (RAG) and agentic frameworks, your role will extend beyond these technologies to a variety of AI tools and techniques.

Your responsibilities will include developing and maintaining web applications using Angular, NDBX frameworks, and other modern technologies. You will design and implement databases in Postgres DB, build ingestion and retrieval pipelines utilizing pgvector and neo4j (a chunking sketch follows below), and ensure efficient and secure data practices. Additionally, you will work with various generative AI models and frameworks such as LangChain, Haystack, and LlamaIndex for tasks like chunking, embeddings, chat completions, and integration with different data sources.

You will collaborate with team members to integrate GenAI capabilities into applications, write clean and efficient code adhering to company standards, conduct testing to identify and fix bugs, and use collaboration tools like GitHub for effective teamwork and code management. Staying current with emerging technologies and applying them in your work will be essential, demonstrating a strong desire for continuous learning.

Qualifications and Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field, with at least 6 years of working experience
- Proven experience as a Full Stack Developer with a focus on designing, developing, and deploying end-to-end applications
- Knowledge of front-end languages and libraries such as HTML/CSS, JavaScript, XML, and jQuery
- Experience with Angular and NDBX frameworks, as well as database technologies like Postgres DB and vector databases
- Proficiency in developing APIs following OpenAPI standards
- Familiarity with generative AI models on cloud platforms like Azure and AWS, including techniques such as Retrieval-Augmented Generation, prompt engineering, agentic RAG, and model context protocols
- Experience with collaboration tools like GitHub and with Docker images for packaging applications

At Allianz, we believe in fostering a diverse and inclusive workforce. We are proud to be an equal opportunity employer that values bringing your authentic self to work, regardless of background, appearance, preferences, or beliefs. Together, we can create an environment where everyone feels empowered to explore, grow, and contribute to a better future for our customers and the global community. Join us at Allianz, and let's work together to care for tomorrow.
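As a small illustration of the chunking step that precedes embedding in the ingestion pipelines above, here is a generic fixed-size-with-overlap splitter in plain Python; the window sizes are common defaults, not Allianz settings.

```python
# Hedged sketch: fixed-size character chunking with overlap, a common
# default before computing embeddings for a vector store.
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "GenAI applications need retrieval-ready chunks. " * 100
print(len(chunk_text(doc)), "chunks")
```

The overlap preserves sentence context that would otherwise be cut at chunk boundaries, at the cost of some storage redundancy.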
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a hands-on backend expert to elevate our FastAPI-based platform to the next level by developing production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. The role covers several key areas:

1. Core Backend Enhancements:
- Building APIs
- Strengthening security with OAuth2/JWT, rate limiting, and SecretManager, and enhancing observability through structured logging and tracing
- Adding CI/CD, test automation, health checks, and SLO dashboards

2. Awesome UI Interfaces:
- Developing UI interfaces using React.js/Next.js, Redux/Context, and CSS frameworks such as Tailwind, MUI, custom CSS, and shadcn

3. LLM & Agentic Services:
- Designing micro/mini-services to host and route to providers such as OpenAI, Anthropic, and local HF models, plus embeddings and RAG pipelines
- Implementing autonomous/recursive agents that orchestrate multi-step chains for tools, memory, and planning

4. Model-Inference Infrastructure:
- Setting up GPU/CPU inference servers behind an API gateway
- Optimizing throughput with techniques like batching, streaming, quantization, and caching using tools like Redis and pgvector (a caching sketch follows below)

5. NLP & Data Services:
- Managing the NLP stack with Transformers for classification, extraction, and embedding generation
- Building data pipelines that combine aggregated business metrics with model telemetry for analytics

You will work with a tech stack that includes Python, FastAPI, Starlette, Pydantic, async SQLAlchemy, Postgres, Docker, Kubernetes, AWS/GCP, Redis, RabbitMQ, Celery, Prometheus, Grafana, OpenTelemetry, and more. Experience building production Python REST APIs, SQL schema design in Postgres, async patterns and concurrency, UI application development, RAG and LLM/embedding workflows, cloud container orchestration, and CI/CD pipelines is essential for this role. Additionally, experience with streaming protocols, NGINX Ingress, SaaS security hardening, data privacy, event-sourced data models, and related technologies would be advantageous.

This role offers the opportunity to work on evolving products, tackle real challenges, and lead the scaling of AI services while working closely with the founder to shape the future of the platform. If you are looking for meaningful ownership and the chance to solve forward-looking problems, this role could be the right fit for you.
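One concrete form the Redis caching under model-inference infrastructure could take: memoizing embedding calls keyed by a content hash. A hedged sketch with redis-py; `embed` is a placeholder for the real model call, and the TTL is an assumption.

```python
# Hedged sketch: memoize embedding calls in Redis, keyed by content hash.
import hashlib
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model/API call.
    return [float(len(text)), 0.0]

def cached_embedding(text: str, ttl: int = 86400) -> list[float]:
    key = "emb:" + hashlib.sha256(text.encode()).hexdigest()
    if (hit := r.get(key)) is not None:
        return json.loads(hit)            # cache hit: skip recomputation
    vec = embed(text)
    r.set(key, json.dumps(vec), ex=ttl)   # expire after one day
    return vec
```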
Posted 2 months ago
3.0 - 5.0 years
8 - 12 Lacs
Noida
Remote
We are looking for a skilled Python Developer with a strong background in web application development using frameworks like Django, Flask, and FastAPI. The ideal candidate should also possess hands-on experience building Generative AI (GenAI) applications leveraging modern AI/ML libraries and frameworks such as LangGraph, LangChain, and LLMs, and integrating them with vector databases like Pinecone or PGVector.

Key Responsibilities:

Web Development:
- Design, develop, and maintain scalable RESTful APIs using Django, Flask, or FastAPI
- Build and deploy modern web applications that are performant, modular, and secure
- Implement API authentication and authorization using OAuth2, JWT, or session-based approaches
- Work with frontend teams to integrate APIs and ensure smooth end-to-end flows
- Follow CI/CD and Git-based deployment workflows using tools like GitHub/GitLab, Jenkins, or Docker

GenAI Application Development:
- Build intelligent applications using LangGraph and LangChain for orchestrating LLM workflows (a minimal graph sketch follows below)
- Integrate OpenAI, Anthropic, HuggingFace, or custom LLMs into production workflows
- Design and optimize RAG (Retrieval-Augmented Generation) pipelines using vector databases such as Pinecone or PGVector

Database and Backend Integration:
- Work with relational databases like PostgreSQL and MySQL
- Write efficient and scalable queries for large-scale datasets
- Experience with AWS/GCP/Azure is a plus

Required Skills:
- Minimum of 3 years of experience in application development using Python/Django
- Proficiency in developing and consuming RESTful APIs
- Experience with LangGraph, LangChain, and GenAI tools/workflows
- Experience with LLMs (OpenAI GPT-4, Claude, Llama, etc.)
- Good understanding of software design principles, code modularity, and version control (Git)
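To ground the LangGraph orchestration bullet, here is a minimal two-node StateGraph sketch; the node logic is a placeholder, and the API usage should be checked against the LangGraph version in use.

```python
# Hedged sketch of a two-node LangGraph workflow: retrieve, then respond.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    question: str
    answer: str

def retrieve(state: State) -> dict:
    # Placeholder: a real node would query a vector store here.
    return {"answer": f"context for: {state['question']}"}

def respond(state: State) -> dict:
    # Placeholder: a real node would call an LLM with the retrieved context.
    return {"answer": state["answer"] + " -> final answer"}

graph = StateGraph(State)
graph.add_node("retrieve", retrieve)
graph.add_node("respond", respond)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "What is RAG?", "answer": ""}))
```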
Posted 2 months ago
6.0 - 9.0 years
0 - 3 Lacs
Pune, Chennai, Bengaluru
Hybrid
Primary skills: Postgres, PgVector (vectorized database), SQL
Experience: 6-9 years
Location: Pune / Mumbai / Chennai / Bangalore
Working experience with databases: Postgres, Hive, MS SQL
Experience with scripting: Python, UNIX shell, Spark
Experience working on AI projects is an added advantage
Posted 3 months ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
Your future duties and responsibilities:

Skills: pgvector, Vertex AI, FastAPI, Flask, Kubernetes

- Develops and optimizes AI applications for production, ensuring seamless integration with enterprise systems and front-end applications
- Builds scalable API layers and microservices using FastAPI, Flask, Docker, and Kubernetes to serve AI models in real-world environments
- Implements and maintains AI pipelines with MLOps best practices, leveraging tools like Azure ML, Databricks, AWS SageMaker, and Vertex AI
- Ensures high availability, reliability, and performance of AI systems through rigorous testing, monitoring, and optimization
- Works with agentic frameworks such as LangChain, LangGraph, and AutoGen to build adaptive AI agents and workflows
- Experience with GCP, AWS, or Azure, utilizing services such as Vertex AI, Bedrock, or Azure OpenAI model endpoints
- Hands-on experience with vector databases such as pgvector, Milvus, Azure Search, and AWS OpenSearch, and embedding models such as Ada and Titan
- Collaborates with architects and scientists to transition AI models from research to fully functional, high-performance production systems

Skills: Azure Search, Flask, Kubernetes
Posted 3 months ago
2.0 - 5.0 years
3 - 7 Lacs
Faridabad
Work from Office
Hiring an AI & Data Retrieval Engineer with expertise in natural language querying (NLQ), Text-to-SQL, LLMs, LangChain, pgVector, PostgreSQL, vector search, Python, AI libraries, agentic AI, and API integration (a Text-to-SQL sketch follows below). Experience with NLP, RAG, BI tools, live projects, and LLM fine-tuning preferred.
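A hedged sketch of the Text-to-SQL (NLQ) workflow this ad names: prompt an LLM with the schema, then execute the generated query read-only. `call_llm` is a canned placeholder, and real systems add far stronger validation and guardrails before execution.

```python
# Hedged Text-to-SQL sketch: schema-in-prompt, read-only execution.
import sqlite3

SCHEMA = "CREATE TABLE jobs (id INTEGER, title TEXT, city TEXT, years REAL);"

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned query here.
    return "SELECT city, COUNT(*) FROM jobs GROUP BY city;"

def answer(question: str, db: sqlite3.Connection) -> list[tuple]:
    prompt = (f"Schema:\n{SCHEMA}\n\nWrite one read-only SQLite query "
              f"answering: {question}\nSQL:")
    sql = call_llm(prompt)
    if not sql.lstrip().upper().startswith("SELECT"):  # crude guardrail
        raise ValueError("only SELECT statements are executed")
    return db.execute(sql).fetchall()

con = sqlite3.connect(":memory:")
con.executescript(SCHEMA + "INSERT INTO jobs VALUES (1,'ML Engineer','Pune',5);")
print(answer("How many jobs per city?", con))
```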
Posted 3 months ago
14.0 - 20.0 years
40 - 50 Lacs
Gurugram, Bengaluru
Hybrid
Job Title: Senior Principal Data Engineer
Location: Gurgaon
Work Schedule: 12:00 PM to 8:30 PM IST
Job Type: Full-Time

Job Summary:
We are seeking an experienced Senior Principal Data Engineer with a strong background in AI/ML, Generative AI, and cloud-native architectures. This role involves leading the design and implementation of advanced data and AI solutions using AWS, mentoring technical teams, and driving innovation through emerging technologies.

Key Responsibilities:
- Architect and design Generative AI solutions leveraging AWS services (e.g., Bedrock, S3, PGVector, Kendra, SageMaker); a Bedrock invocation sketch follows below
- Collaborate with engineering teams throughout the software development lifecycle to ensure robust and scalable solutions
- Lead technical decision-making and resolve complex AI/ML challenges
- Conduct solution reviews and ensure alignment with best practices and security policies
- Guide solution governance and secure necessary architectural approvals
- Integrate emerging technologies and frameworks (e.g., LangChain) into solution designs
- Deliver technical presentations, workshops, and knowledge-sharing sessions
- Create and maintain architectural documentation and design specifications
- Mentor junior engineers and contribute to a culture of continuous learning
- Partner with data scientists and analysts to enable effective model development and deployment
- Coordinate with stakeholders and clients to align data architecture with business objectives
- Stay current with industry trends in AI, machine learning, data engineering, and cloud technologies

Required Qualifications:
- 12-15 years of experience in software development and architecture
- Proven expertise in designing and delivering AI/ML and data-driven solutions
- Deep understanding of AWS cloud services, especially Bedrock, SageMaker, Kendra, S3, and PGVector
- Strong programming skills in Python (required), with additional experience in Java or Scala preferred
- Solid foundation in data structures, algorithms, and software design patterns
- Experience building ETL/data pipelines and working with diverse data types (structured, unstructured, semi-structured)
- Understanding of DevOps practices and CI/CD pipelines
- Familiarity with Generative AI frameworks such as LangChain
- Knowledge of AI ethics, bias mitigation, and responsible AI principles

Nice to Have:
- Experience working with large-scale enterprise data systems
- Exposure to cloud governance and architectural review boards
- Certifications in AWS or AI/ML technologies
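To illustrate the Bedrock piece of this role, here is a hedged boto3 sketch of invoking a Claude model through the bedrock-runtime client; the model ID, region, and request body shape are assumptions to verify against current AWS documentation.

```python
# Hedged sketch of invoking a foundation model on AWS Bedrock via boto3.
# Model ID, region, and the Anthropic messages body shape are assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize RAG in one line."}],
})

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=body,
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```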
Posted Date not available