9.0 - 14.0 years
32 - 37 Lacs
Hyderabad
Work from Office
Mandate skills: Python, plus 2.5-3 years of current experience in Generative AI (GenAI). Location: Hyderabad. Notice period: Immediate to 30 days. Budget: 34-38 LPA. Contact: Kashif@d2nsolutions.com
Posted 1 week ago
4 - 9 years
6 - 16 Lacs
Noida
Hybrid
Hexaware is conducting a walk-in interview for Data Scientist (GenAI) / Lead Data Scientist (GenAI) / Data Scientist Architect (GenAI), Noida location, 12th April 2025 (Saturday). We are urgently looking for immediate or early joiners. Interested candidates can share their CV at umaparvathyc@hexaware.com
Must have:
1. Strong experience as a Data Scientist (GenAI).
2. Strong hands-on experience with GenAI LLM models (ChatGPT, LLaMA 2, etc.), vector databases, LangChain, LangGraph, LlamaIndex, Azure/AWS, Bedrock, and GPT-4.
3. Strong skills in Python, machine learning, deep learning architecture, NLP, and OCR.
Primary skills:
- Good understanding of GenAI LLM models (ChatGPT, LLaMA 2, etc.), vector databases, LangChain, and LlamaIndex.
- Hands-on experience with deep learning architecture, NLP, and OCR.
- Python FastAPI experience; SDA-based implementations for all the APIs.
- The architect should be hands-on and review the code developed by the developers.
- Should be able to take on and work through spike stories assigned to them.
Posted 2 months ago
3 - 8 years
10 - 15 Lacs
Bengaluru
Work from Office
We are urgently looking for a Data Engineer for our Bangalore office.
Experience: 3+ years in a relevant field
Mandatory: Python, FastAPI, REST APIs, data pipelines
Work mode: Hybrid (2 days WFO)
Responsibilities:
- Build, maintain, and optimize data pipelines and storage solutions to ensure data quality, accessibility, and performance for automation testing and AI applications.
- Develop, implement, and maintain Python-based solutions for ETL processes, data transformation, and system integration.
- Integrate data engineering workflows with AI and machine learning applications, ensuring compatibility and scalability.
- Design and implement REST APIs and WebSocket connections for real-time data transfer and integration.
- Collaborate closely with AI and R&D teams to support new AI features, ensuring seamless integration with data systems.
- Work on distributed systems to ensure scalability and reliability for high-volume data processing tasks.
- Implement and optimize data solutions for structured and unstructured data, including SQL and NoSQL databases.
- Participate in system design and architecture discussions to create high-performance, scalable, and maintainable systems.
Job description: We are looking for a Data Engineer who is passionate about building scalable, high-performance data solutions. The ideal candidate should have strong Python programming skills and hands-on experience in data engineering tasks, including data pipelines, distributed systems, and database management (SQL and NoSQL). You will work in a collaborative environment, focusing on implementing code day in and day out while ensuring compatibility with AI and automation testing workflows.
Skills:
1. Python
2. Problem solving and programming
3. Data engineering
4. Generative AI
5. REST APIs
6. WebSockets
7. UI and integration
8. Data pipelines
9. SQL & NoSQL databases
10. Distributed systems
11. High- and low-level system design
12. Data warehouses and lakes
13. AIOps
14. RPA
What we are looking for:
- Hands-on Python expertise: ability to write efficient, maintainable, and scalable code daily.
- Implementation-oriented: a focus on delivering working code and solutions, not just theoretical designs.
- Team collaboration: a proactive team player who thrives in a collaborative environment.
- Data expertise: solid knowledge of data transformation, storage, and real-time data integration techniques.
This role is a fantastic opportunity for individuals who enjoy working closely with data, building practical solutions, and pushing the boundaries of automation testing with AI.
Thank you,
Rajashri
QUINNOX
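The posting above centers on Python-based ETL and data transformation. As an illustrative sketch only (all names and data here are hypothetical, not from the posting), an ETL pipeline of the kind described might chain extract, transform, and load steps:

```python
# Minimal ETL pipeline sketch: extract raw records, clean them, load into a store.
# All names and data are illustrative; a real pipeline would read from databases
# or APIs and write to a warehouse.

def extract():
    # Stand-in for reading from a source system (DB, API, file).
    return [
        {"id": 1, "name": " Alice ", "score": "91"},
        {"id": 2, "name": "Bob", "score": None},
    ]

def transform(records):
    # Normalize strings, cast types, and drop rows with missing values.
    cleaned = []
    for r in records:
        if r["score"] is None:
            continue
        cleaned.append({"id": r["id"],
                        "name": r["name"].strip(),
                        "score": int(r["score"])})
    return cleaned

def load(records, store):
    # Stand-in for writing to a warehouse table keyed by id.
    for r in records:
        store[r["id"]] = r
    return store

store = load(transform(extract()), {})
```

A production version would expose such steps behind the REST/WebSocket services the posting mentions; this sketch only shows the pipeline shape.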
Posted 3 months ago
8 - 13 years
20 - 35 Lacs
Chennai, Pune, Bengaluru
Hybrid
The Senior Manager, Data Science will lead a team of data scientists to develop advanced analytical models and drive data-driven decision-making across the organization. This role requires a strong blend of technical expertise, strategic thinking, and leadership skills. The Senior Manager will collaborate with cross-functional teams to identify business opportunities, design and implement innovative data solutions, and contribute to the company's overall strategic goals.
Key responsibilities:
- Leadership and team management: Lead, mentor, and develop a team of data scientists. Foster a culture of innovation, collaboration, and continuous learning. Set clear goals and performance expectations for the team.
- Strategy and vision: Define and implement the data science strategy aligned with the company's objectives. Identify and prioritize data science projects that drive business value. Stay abreast of industry trends and emerging technologies to keep the organization at the forefront of data science.
- Technical expertise: Oversee the development and deployment of machine learning models and algorithms. Ensure best practices in data modeling, coding, and experimental design. Provide technical guidance on complex data science projects.
- Collaboration and communication: Work closely with business leaders to understand their needs and translate them into data science initiatives. Communicate findings and recommendations to both technical and non-technical stakeholders. Collaborate with data engineering, IT, and other departments to ensure data availability and integrity.
- Project management: Manage multiple data science projects, ensuring timely and successful delivery. Allocate resources effectively and balance team workloads. Monitor project progress and make adjustments as needed.
- Performance measurement: Develop metrics to measure the impact and effectiveness of data science initiatives. Continuously evaluate and refine models and strategies based on performance data. Report on key performance indicators to senior management.
Qualifications:
- Education: Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. Master's or Ph.D. preferred.
- Experience: 7+ years of experience in data science or a related field; 3+ years in a leadership or managerial role within a data science team; proven track record of delivering impactful data science solutions in a business context.
- Technical skills: Expertise in statistical analysis, machine learning, and predictive modeling. Proficiency in programming languages such as Python, R, or SQL. Experience with data visualization tools like Tableau, Power BI, or similar. Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure).
- Soft skills: Strong analytical and problem-solving abilities. Excellent communication and presentation skills. Ability to translate complex technical concepts into actionable business insights. Strong organizational and project management skills.
Preferred qualifications:
- Experience in [specific industry or domain relevant to the company].
- Knowledge of deep learning, NLP, or other advanced AI techniques.
- Experience with Agile methodologies and project management tools.
Posted 3 months ago
5 - 10 years
20 - 35 Lacs
Bengaluru
Work from Office
JD: Machine Learning Engineer, Bangalore location
Role overview: We are seeking a skilled and innovative Machine Learning Engineer with expertise in Large Language Models (LLMs) to join our team. The ideal candidate should have hands-on experience developing, fine-tuning, and deploying LLMs, alongside a deep understanding of the machine learning lifecycle. The candidate must have basic knowledge of Java in order to read existing code. This role involves building scalable AI solutions, collaborating with cross-functional teams, and contributing to cutting-edge AI initiatives.
Key responsibilities:
- Model development & optimization: Develop, fine-tune, and deploy LLMs such as OpenAI's GPT, Anthropic's Claude, Google's Gemini, or models on AWS Bedrock. Customize pre-trained models for specific use cases, ensuring high performance and scalability.
- Machine learning pipeline design: Build and maintain end-to-end ML pipelines, from data preprocessing to model deployment. Optimize training workflows for efficiency and accuracy.
- Integration & deployment: Work closely with software engineering teams to integrate ML solutions into production environments. Ensure APIs and solutions are scalable and robust.
- Experimentation & research: Experiment with new architectures, frameworks, and approaches to improve model performance. Stay updated with advancements in LLMs and generative AI technologies.
- Collaboration: Collaborate with cross-functional teams, including data scientists, engineers, and product managers, to align ML solutions with business goals. Provide mentorship to junior team members as needed.
Required qualifications:
- Experience: At least 5 years of professional experience in machine learning or AI development. Proven experience with LLMs and generative AI technologies.
- Technical skills: Proficiency in Python (required) and/or Java (bonus). Hands-on experience with APIs and tools like OpenAI, Anthropic's Claude, Google Gemini, or AWS Bedrock. Familiarity with ML frameworks such as TensorFlow, PyTorch, or Hugging Face. Strong understanding of data structures, algorithms, and distributed systems.
- Cloud expertise: Experience with AWS, GCP, or Azure, including services relevant to ML workloads (e.g., AWS SageMaker, Bedrock).
- Data engineering: Proficiency in handling large-scale datasets and implementing data pipelines. Experience with ETL tools and platforms for efficient data preprocessing.
- Problem solving: Strong analytical and problem-solving skills, with the ability to debug and resolve issues quickly.
Preferred qualifications:
- Experience with multi-modal models and generative AI for images, text, or other modalities.
- Understanding of MLOps principles and tools (e.g., MLflow, Kubeflow).
- Familiarity with reinforcement learning and its applications in AI.
- Knowledge of distributed training techniques and tools like Horovod or Ray.
- Advanced degree (Master's or Ph.D.) in Computer Science, Machine Learning, or a related field.
Most important points:
- Must be OK with an individual contributor role
- Thorough expertise in LLMs
- Able to join within 15 days
- This is a complete work-from-office role (Whitefield, Bangalore)
Interview process (3 rounds):
Round 1: Technical round on call
Round 2: Technical round + coding test on video call
Round 3: Managerial + HR round
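The role above calls for end-to-end ML pipelines from preprocessing to deployment. As a hedged, toy illustration of that pipeline shape (the stages and data are invented for this sketch; real stages would wrap a framework like TensorFlow or PyTorch), stages can be composed into a single callable:

```python
# Toy end-to-end pipeline built from composable stages. Everything here is a
# stand-in: preprocessing is lowercasing, "features" are token counts, and the
# "model" is a threshold rule.

from functools import reduce

def pipeline(*stages):
    """Compose stages left-to-right into a single callable."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

def preprocess(texts):
    return [t.lower().strip() for t in texts]

def featurize(texts):
    # Token count standing in for real embeddings or feature extraction.
    return [len(t.split()) for t in texts]

def predict(features):
    # Threshold rule standing in for a trained model.
    return ["long" if f > 2 else "short" for f in features]

run = pipeline(preprocess, featurize, predict)
result = run(["  Hello World  ", "one two three four"])
```

The same composition pattern extends to real stages (tokenization, model inference, postprocessing) without changing the pipeline wiring.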
Posted 3 months ago
5 - 8 years
20 - 25 Lacs
Bengaluru
Hybrid
Company description: When you're one of us, you get to run with the best. For decades, we've been helping marketers from the world's top brands personalize experiences for millions of people with our cutting-edge technology, solutions, and services. Epsilon's best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging, and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modelling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek, and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit epsilon.com/apac or our LinkedIn page.
Job description:
About the BU: The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices and ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.
Why are we looking for you? The primary role of the SSE is to envision and build internet-scale services on the cloud using Python and Generative AI. We are seeking an experienced and innovative Senior Software Engineer to lead the design, development, and deployment of scalable Python applications integrated with advanced Generative AI capabilities. In this role, you will work on AWS Bedrock models, Retrieval-Augmented Generation (RAG), multi-agent systems, and real-time feedback loops. Additionally, you'll develop dynamic, interactive front-end solutions using Angular UI frameworks. You'll collaborate with teams across various functions to create next-gen AI-powered software products that address complex challenges.
What will you enjoy in this role? You will focus on designing, developing, and supporting all our online data solutions, working closely with business managers to design and build innovative solutions.
What you'll do:
- AI integration & development: Develop and optimize applications integrating Generative AI models, including AWS Bedrock models, Retrieval-Augmented Generation (RAG), and multi-agent systems, to enhance user experience and business processes.
- Backend development: Design, implement, and maintain scalable, high-performance Python applications that utilize machine learning and AI models to solve real-world problems.
- AI feedback loops: Design and integrate real-time feedback systems to continuously improve the performance and accuracy of AI models, ensuring user-centric enhancements.
- Multi-agent systems: Develop and optimize multi-agent architectures where different AI agents collaborate and interact to achieve complex goals. Ensure efficient communication and decision-making across agents.
- Frontend development: Collaborate with UI/UX teams to develop intuitive, scalable, and responsive web applications using Angular, enhancing user interaction with AI systems.
- Cloud infrastructure: Leverage AWS services (specifically AWS Bedrock) and other cloud technologies to deploy, scale, and manage AI models in production, ensuring reliability and performance.
- System design & architecture: Lead the design of robust and scalable system architectures, ensuring seamless integration of front-end and back-end components with AI models and cloud infrastructure.
- Collaboration & mentorship: Work closely with product managers, data scientists, and other engineers to understand business requirements, translate them into technical solutions, and mentor junior engineers.
- Continuous learning & innovation: Stay up to date with the latest trends in Generative AI, AWS Bedrock, multi-agent systems, and front-end technologies to continuously improve your skill set and bring innovative solutions to the team.
Qualifications: A degree (B.E/B.Tech/M.Tech) in computer science or a related field, or equivalent experience.
- Experience: 5+ years of experience in Python software development with a strong focus on backend systems, cloud-native applications, or AI-powered solutions.
- Generative AI expertise: Proven experience working with Generative AI technologies, including language models (e.g., GPT-3/4), image generation models (e.g., GANs), or similar AI applications.
- Frameworks & libraries: Proficiency in Python frameworks like Flask, Django, FastAPI, or others. Familiarity with machine learning libraries such as TensorFlow, PyTorch, or Hugging Face Transformers.
- AI/ML deployment: Hands-on experience deploying AI/ML models at scale in production environments using cloud platforms like AWS, GCP, or Azure.
- Cloud technologies: Experience with cloud services (AWS, Google Cloud, Azure) for building scalable and resilient solutions.
- Software design & architecture: Strong understanding of software design patterns, microservices architecture, RESTful API development, and scalability principles.
- Problem solving: Strong analytical and problem-solving skills, with the ability to break down complex tasks and deliver solutions in a timely manner.
- Collaboration & communication: Excellent communication skills and the ability to work effectively in cross-functional teams, with a focus on mentoring and leadership.
Nice to have:
- Advanced AI knowledge: Familiarity with cutting-edge Generative AI models (e.g., large-scale pre-trained models, deep reinforcement learning).
- DevOps & CI/CD: Experience with DevOps practices, containerization (Docker), and CI/CD pipelines for AI/ML workflows.
- Data engineering skills: Familiarity with data pipelines, ETL processes, and database technologies (SQL, NoSQL).
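The posting above repeatedly references Retrieval-Augmented Generation. As a heavily simplified sketch of the retrieval step only (the documents, embeddings, and function names are invented here; a real system would call an embedding model, e.g. via AWS Bedrock, and a vector database), RAG ranks documents by embedding similarity and injects the best match into the prompt:

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity of embeddings, then
# prepend the best match as context. The 3-dimensional vectors are hand-made
# stand-ins for real embedding-model output.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    context = "; ".join(retrieve(query_vec))
    return f"Context: {context}\nQuestion: {question}"

prompt = build_prompt("How do refunds work?", [0.8, 0.2, 0.1])
```

The augmented prompt is then what gets sent to the LLM; the model answers grounded in the retrieved context rather than from parametric memory alone.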
Posted 3 months ago
7 - 12 years
35 - 50 Lacs
Pune
Hybrid
Overview: We are looking for a hands-on, full-cycle AI/ML Engineer who will play a central role in developing a cutting-edge AI agent platform. This platform is designed to automate and optimize complex workflows by leveraging large language models (LLMs), retrieval-augmented generation (RAG), knowledge graphs, and agent orchestration frameworks. As the AI/ML Engineer, you will be responsible for building intelligent agents from the ground up, including prompt design, retrieval pipelines, fine-tuning models, and deploying them in a secure, scalable cloud environment. You'll also implement caching strategies, handle backend integration, and prototype user interfaces for internal and client testing. This role requires deep technical skills, autonomy, and a passion for bringing applied AI solutions into real-world use.
Key responsibilities:
- Design and implement modular AI agents using large language models (LLMs) to automate and optimize a variety of complex workflows
- Deploy and maintain end-to-end agent/AI workflows and services in cloud environments, ensuring reliability, scalability, and low-latency performance for production use
- Build and orchestrate multi-agent systems using frameworks like LangGraph or CrewAI, supporting context-aware, multi-step reasoning and task execution
- Develop and optimize retrieval-augmented generation (RAG) pipelines using vector databases (e.g., Qdrant, Pinecone, FAISS) to power semantic search and intelligent document workflows
- Fine-tune LLMs using frameworks such as Hugging Face Transformers, LoRA/PEFT, DeepSpeed, or Accelerate to create domain-adapted models
- Integrate knowledge graphs (e.g., Neo4j, AWS Neptune) into agent pipelines for context enhancement, reasoning, and relationship modeling
- Implement cache-augmented generation strategies using semantic caching and tools like Redis or vector similarity to reduce latency and improve consistency
- Build scalable backend services using FastAPI or Flask and develop lightweight user interfaces or prototypes with tools like Streamlit, Gradio, or React
- Monitor and evaluate model and agent performance using prompt testing, feedback loops, observability tools, and safe AI practices
- Collaborate with architects, product managers, and other developers to translate problem statements into scalable, reliable, and explainable AI systems
- Stay updated on the latest in cloud platforms (AWS/GCP/Azure), software frameworks, agentic frameworks, and AI/ML technologies
Prerequisites:
- Strong Python development skills, including API development and service integration
- Experience with LLM APIs (OpenAI, Anthropic, Hugging Face), agent frameworks (LangChain, LangGraph, CrewAI), and prompt engineering
- Experience deploying AI-powered applications using Docker, cloud infrastructure (Azure preferred), and managing inference endpoints, vector DBs, and knowledge graph integrations in a live production setting
- Proven experience with RAG pipelines and vector databases (Qdrant, Pinecone, FAISS)
- Hands-on experience fine-tuning LLMs using PyTorch, Hugging Face Transformers, and optionally TensorFlow, with knowledge of LoRA, PEFT, or distributed training tools like DeepSpeed
- Familiarity with knowledge graphs and graph databases such as Neo4j or AWS Neptune, including schema design and Cypher/Gremlin querying
- Basic frontend prototyping skills using Streamlit or Gradio, and the ability to work with frontend teams if needed
- Working knowledge of MLOps practices (e.g., MLflow, Weights & Biases), containerization (Docker), Git, and CI/CD workflows
- Cloud deployment experience with Azure, AWS, or GCP environments
- Understanding of caching strategies, embedding-based similarity, and response optimization through semantic caching
Preferred qualifications:
- Bachelor's degree in Technology (B.Tech) or Master of Computer Applications (MCA) is required; an MS in a similar field is preferred
- 7-10 years of experience in AI/ML, with at least 2 years focused on large language models, applied NLP, or agent-based systems
- Demonstrated ability to build and ship real-world AI-powered applications or platforms, preferably involving agents or LLM-centric workflows
- Strong analytical, problem-solving, and communication skills
- Ability to work independently in a fast-moving, collaborative, cross-functional environment
- Prior experience in startups, innovation labs, or consulting firms is a plus
Compensation: The compensation structure will be discussed during the interview.
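The role above asks for semantic caching to cut LLM latency. As a hedged in-memory sketch (the threshold, vectors, and class name are invented for illustration; production systems would typically pair Redis with a real embedding model), the idea is to reuse a cached answer when a new query's embedding is close enough to a previously seen one:

```python
import math

# In-memory semantic cache sketch: a cache hit occurs when a new query
# embedding is within a cosine-similarity threshold of a stored one, letting
# the service skip the expensive LLM call. Values are toy stand-ins.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, embedding):
        for cached_vec, answer in self.entries:
            if cosine(embedding, cached_vec) >= self.threshold:
                return answer  # cache hit: reuse the earlier answer
        return None  # cache miss: caller falls through to the LLM

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache()
cache.put([1.0, 0.0], "Refunds take 5 days.")
hit = cache.get([0.99, 0.05])   # near-duplicate query: similarity ~0.999
miss = cache.get([0.0, 1.0])    # unrelated query: similarity 0.0
```

The threshold trades latency savings against the risk of serving a stale or mismatched answer, which is why the posting pairs caching with response-optimization work.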
Posted 1 month ago