
1443 Generative AI Jobs - Page 28

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

12 - 22 Lacs

New Delhi, Pune, Bengaluru

Work from Office

Job Description
We are seeking a talented and passionate GenAI Developer with strong expertise in Python and cloud exposure to design, develop, and deploy cutting-edge generative AI solutions. The ideal candidate will be instrumental in leveraging GCP's AI services to build scalable, robust, and performant GenAI applications that solve complex business problems. The candidate will work on the full lifecycle of GenAI projects, from ideation and prototyping to deployment and optimization, contributing to the advancement of our AI capabilities.

Roles & Responsibilities:
- Design, develop, and implement generative AI applications using Python and GCP's AI/ML services (e.g., Vertex AI, GenAI APIs, Google Kubernetes Engine, Cloud Functions, BigQuery).
- Collaborate with product managers, data scientists, and other engineers to understand requirements and translate them into technical specifications for GenAI solutions.
- Stay up to date with the latest advancements in generative AI, large language models (LLMs), and related technologies.
- Develop and integrate APIs and services for seamless interaction between GenAI models and existing systems.
- Participate in code reviews, contribute to architectural discussions, and promote best practices within the team.
- Document technical designs, processes, and model specifications.

Required Skills & Qualifications:
- 3+ years of professional experience in software development.
- Expert proficiency in Python and its relevant libraries.
- Strong hands-on experience with GenAI APIs (Gemini and others).
- Experience with RESTful APIs and microservices architecture.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication and collaboration skills.
Good to Have Skills & Experience:
- Exposure to Large Language Models (LLMs).
- Experience with Retrieval-Augmented Generation (RAG) frameworks and implementations.
- Familiarity with LangChain.
- Familiarity with Cloud Storage, Cloud Functions / Cloud Run, and Cloud Logging / Monitoring.

Mandatory Skills: Vertex AI, GenAI APIs, Google Kubernetes Engine.
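Roles like the one above typically combine a retrieval layer with a GenAI API call. The sketch below shows only the prompt-assembly step in the simplest form; the template wording and function name are illustrative assumptions, not part of any Vertex AI or Gemini SDK.

```python
# Illustrative only: a minimal helper that stuffs retrieved context into a
# prompt before it would be sent to an LLM API. Template text and names are
# hypothetical, not from any GCP SDK.

def build_grounded_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Format retrieved snippets as bullets ahead of the user's question."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_grounded_prompt(
    "Which database does the service use?",
    ["The service stores analytics data in BigQuery.",
     "Batch jobs run on Google Kubernetes Engine."],
)
print(prompt)
```

In a real application, the resulting string would be passed to the model endpoint; keeping assembly separate from the API call makes the grounding logic easy to unit-test.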

Posted 1 month ago

Apply

6.0 - 10.0 years

18 - 27 Lacs

Bengaluru

Work from Office

Roles and Responsibilities:
- Design, develop, and deploy generative AI models using machine learning algorithms to generate high-quality text data.
- Collaborate with cross-functional teams to integrate generative AI solutions into existing systems and applications.
- Develop end-to-end implementations of generative AI projects, from conceptualization to deployment, on the Microsoft Azure platform.
- Troubleshoot issues related to model performance, data quality, and scalability.

Job Requirements:
- 6-10 years of experience in Data Science or a related field, with expertise in Generative AI/Machine Learning.
- Strong proficiency in programming languages such as Python or R; experience with TensorFlow, PyTorch, or Hugging Face libraries is desirable.
- Experience working with cloud-based platforms like Microsoft Azure; knowledge of containerization (e.g., Docker) is an added advantage.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 - 1 Lacs

Chennai

Work from Office

Experience: 3 - 5 Years
Location: Chennai
Employment Type: Full-time

Job Description:
We are seeking a highly skilled and motivated AI/ML Engineer with 3-5 years of hands-on experience in developing and deploying intelligent systems. As part of our dynamic AI team, you will work on cutting-edge projects involving Machine Learning, Natural Language Processing (NLP), and Generative AI technologies. This role requires strong programming skills, a deep understanding of AI frameworks, and experience in building scalable ML pipelines and APIs.

Key Responsibilities:
- Design, develop, and deploy machine learning and deep learning models to solve real-world problems.
- Work on NLP tasks such as text classification and sentiment analysis.
- Build and maintain scalable RAG (Retrieval-Augmented Generation) pipelines for Generative AI applications.
- Develop GenAI applications using foundation model APIs (e.g., Vertex AI, OpenAI, LLaMA).
- Build and deploy FastAPI services to serve models and integrate with external systems.
- Optimize and containerize models using Docker for efficient deployment.
- Collaborate with cross-functional teams to integrate AI models into production systems.
- Perform data analysis using pandas and other Python libraries to extract meaningful insights.
- Stay up to date with the latest AI/ML research and best practices.

Required Skills: Python, Machine Learning, Natural Language Processing (NLP), Generative AI, Docker, Deep Learning, pandas, Retrieval-Augmented Generation (RAG), FastAPI, Foundation Model APIs (e.g., OpenAI, Gemini, Claude)

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
- Experience working in agile development environments.
- Excellent problem-solving and communication skills.
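The retrieval step of the RAG pipelines this listing mentions can be sketched in a few lines. Real pipelines use learned embeddings and a vector database; here, bag-of-words vectors and cosine similarity stand in for both, purely to show the shape of the ranking step.

```python
# Toy RAG retrieval: rank documents against a query by cosine similarity
# of bag-of-words vectors. Everything here (docs, query) is made up.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """A stand-in 'embedding': word counts (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "FastAPI serves the model behind a REST endpoint",
    "pandas is used for exploratory data analysis",
    "Docker images package the model for deployment",
]
print(retrieve("which tool serves the model endpoint", docs))
```

Swapping `embed` for a transformer encoder and `retrieve` for a vector-database query gives the production version of the same interface.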

Posted 1 month ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Hyderabad

Remote

Staff/Sr. Staff Engineer
Experience: 6 - 15 Years
Salary: Competitive
Preferred Notice Period: Within 60 Days
Shift: 10:00 AM to 6:00 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Airflow OR LLMs OR MLOps OR Generative AI, and Python

Netskope (one of Uplers' clients) is looking for a Staff/Sr. Staff Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job Summary:
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products, and are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of the logical and physical data models that support these environments.
What's in it for you:
- You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What you will be doing:
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required skills and experience:

AI/ML Expertise:
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering:
- Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and pgvector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge:
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering:
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration:
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Netskope, a global SASE leader, helps organizations apply zero trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity.
Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organizational and network changes, and new regulatory requirements.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
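The anomaly-detection work this role describes can be illustrated in miniature: flag telemetry values far from the mean by z-score. Production systems would use learned models over real log and network features; the threshold and toy data below are assumptions made only for the sketch.

```python
# Minimal anomaly detection on toy telemetry: flag values whose z-score
# exceeds a threshold. Real security pipelines use far richer features
# and learned models; this shows only the core statistical idea.
import statistics

def zscore_anomalies(values: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices of values whose |z-score| exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant signal: nothing to flag
    return [i for i, v in enumerate(values) if abs((v - mean) / stdev) > threshold]

# Toy requests-per-second telemetry with one obvious spike at index 7.
rps = [102, 98, 101, 97, 103, 99, 100, 950, 98, 102]
print(zscore_anomalies(rps))  # → [7]
```

Note that with a population standard deviation over n points, |z| is bounded by sqrt(n - 1), so thresholds near 3 are unreachable on very small windows; streaming implementations use rolling windows large enough for the threshold to be meaningful.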

Posted 1 month ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Pune

Remote


Posted 1 month ago

Apply

9.0 - 14.0 years

50 - 70 Lacs

Bengaluru

Remote


Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Job Description: Product Manager, Virtual Agent

Responsibilities:
- Responsible for operational management of the support teams' tools and services supporting the Service Desk, including virtual agent, survey, translations, and scheduling.
- Responsible for the ServiceNow Virtual Agent, using it to simultaneously improve user experience and reduce higher-cost interactions with the technology service desk.
- Possess an in-depth and broad understanding of industry standards and align the support teams' roadmap to them.
- Manage release management of any technologies with vendors.
- Drive innovation into the support processes and teams through experimentation with new technologies and a roadmap to move them into production.
- Participate in industry best practices, vendor reviews, and lessons learned to bring innovation to the Kenvue end-user experience.

Qualifications

Required:
- B.A./B.S. degree, preferably in Science, Technology, Engineering, or Math (STEM).
- Minimum of 8 years of relevant technology experience.
- Strong analytical skills and demonstrated experience driving operational improvements.
- In-depth experience with ServiceNow.
- Experience achieving reductions in support cost/demand through deployment of chatbot solutions.
- Experience in developing and managing operational service improvement efforts.
- Excellent stakeholder management skills.
- Customer and Employee Experience oriented.

Preferred:
- Experience working in large and/or global organizations.
- Prior experience in IT End User Services / Client Services.
- Experience utilizing generative AI in chatbot and/or service desk use cases.

Primary Location: Asia Pacific-India-Karnataka-Bangalore
Job Function: Operations (IT)

Posted 1 month ago

Apply

6.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Remote

Job Title: Senior Business Analyst – AI/Healthcare Product Delivery
Location: Remote / Hyderabad (Flexible)
Type: Full-time | Immediate Joiner Preferred

Role Overview
We are seeking a Senior Business Analyst to lead business-side coordination, delivery planning, and stakeholder alignment for a high-impact, AI-driven healthcare platform. The ideal candidate will possess a strong foundation in healthtech, AI/ML product ecosystems, and agile delivery practices, with a proven ability to manage both technical and business-facing stakeholders in a fast-paced, evolving environment.

Key Responsibilities:
- Act as the primary interface between business stakeholders and the technical team, translating vision into actionable requirements.
- Develop and maintain clear user stories, functional specs, flow diagrams, and process maps.
- Coordinate across teams involved in LLMs, RAG pipelines, GPT-based models, vector DBs, and deployment infrastructure (AWS, Hugging Face, Lambda Labs).
- Drive release planning, sprint reviews, retrospectives, and end-user validation.
- Monitor delivery progress, identify risks, and proactively resolve blockers.
- Ensure product decisions are data-driven and clinically validated, where applicable.
- Coordinate with subject matter experts (SMEs) to gather and structure domain-specific datasets (e.g., CDC, WHO, PubMed, NICE).
- Ensure compliance with HIPAA and other relevant data handling protocols.
- Align product quality and performance expectations through evaluation metrics and testing frameworks.

Skills & Qualifications:
- 6–10 years' experience as a Business Analyst or Product Owner, preferably in healthcare or on AI-based platforms.
- Familiarity with Generative AI, LLMs (GPT, LLaMA), and RAG architecture is a must.
- Experience working in agile environments with tools like JIRA, Confluence, Notion, and Miro.
- Strong understanding of software development cycles, backend workflows, and cloud infrastructure (e.g., AWS EC2/S3, Hugging Face endpoints).
- Excellent communication, stakeholder management, and problem-solving skills.
- Ability to work cross-functionally with data engineers, QA, DevOps, SMEs, and leadership.

Preferred (Not Mandatory):
- Exposure to HIPAA-compliant systems and medical AI tools.
- Prior experience managing AI model delivery or collaborating with AI/ML engineers.
- Experience with evaluation metrics, benchmarking, and prompt optimization for LLMs.

To Apply
Email your resume to: sreemith.kushal@ekshvaku.com
Subject line: Application – Senior Business Analyst

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Position: Junior Software Developer
Location: Vashi, Navi Mumbai

Responsibilities:
- Assist in designing, developing, and testing machine learning models.
- Work with data engineering teams to collect, clean, and format data for analysis.
- Implement current machine learning algorithms and experiment with new techniques.
- Document and present model development processes and results to key stakeholders.
- Contribute to improving existing AI functionalities within our products.
- Stay updated on AI/ML advancements and suggest potential integrations.

Who We're Looking For:
- Individuals with a strong background in Python and SQL, and a passion for Machine Learning, NLP, and Generative AI.
- Proficiency in interacting with databases (MySQL) using Python.
- Experience with sentiment analysis, topic modeling, Hugging Face models, RAG, LLM fine-tuning, and Stability AI models.
- Familiarity with creating and managing APIs, with experience in tools like Swagger for documentation.
- Fully committed individuals not currently pursuing any academic programs.
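The sentiment-analysis work mentioned above can be sketched in its simplest possible form: a lexicon-based scorer. Real systems would use Hugging Face transformer models; the word lists here are made up solely for illustration.

```python
# Toy lexicon-based sentiment scorer. The POSITIVE/NEGATIVE word sets are
# invented for this sketch; production work would use a trained model.
POSITIVE = {"great", "good", "excellent", "love", "fast"}
NEGATIVE = {"bad", "slow", "bug", "crash", "hate"}

def sentiment(text: str) -> str:
    """Classify text by counting positive vs negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("great product, love it"))  # → positive
```

The lexicon approach fails on negation and sarcasm, which is exactly why the listing pairs this task with transformer models; but the input/output contract (text in, label out) is the same.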

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Thane, Maharashtra

On-site

We are looking for a highly skilled AI and 3D Computer Vision Developer to join our innovative team. Your main focus will be on developing advanced algorithms for spatial perception, 3D computer vision, and sensor data analytics. In this role, you will design and implement cutting-edge AI solutions for object detection, tracking, and scene understanding, and work on sensor data analytics for various applications. Your expertise in machine learning, deep learning, and computer vision will be crucial in creating real-time applications. Strong proficiency in spatial analytics, sensor fusion, and data-driven problem-solving is essential for success in this role. If you are passionate about pushing the boundaries of AI and 3D perception, we are excited to hear from you!

To be considered for this position, you should have at least 4 years of overall work experience. Ideal candidates will have experience in areas such as spatial or scene understanding, data analytics, and advanced AI technologies, including detection, tracking, segmentation, and recognition; handling large volumes of sensor data; and hands-on experience with Large Language Models, Generative AI, and other cutting-edge artificial intelligence technologies.
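A small, concrete piece of the detection and tracking work described above is intersection-over-union (IoU), the standard overlap metric used to match fresh detections against tracked objects. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form:

```python
# Intersection-over-union between two axis-aligned bounding boxes.
# Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes sharing a 5x5 corner: overlap 25 over union 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```

Trackers typically greedily (or via the Hungarian algorithm) assign detections to tracks whose IoU exceeds some threshold, commonly around 0.5.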

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have an understanding of configuring Genesys PureCloud and PureCloud BYOC Cloud. Design Genesys Architect flows using various Genesys tools, Google Dialogflow, and voice bots. Ensure proper coordination of software application design, development, testing, quality assurance, configuration, installation, and support for smooth, stable, and timely implementation of work requests and issue resolution. Deep functional and technical understanding of APIs and their integration with external NLU services. Functional and technical understanding of building API-based integrations with Salesforce, MS Dynamics, etc. You should have 5 to 10 years of experience implementing multichannel self-service/IVR and omni-channel routing on the Genesys Engage/PureCloud suite. Hands-on experience solutioning voice and non-voice (SMS, email, chat, etc.) applications using Genesys Cloud Architect. Experience with Genesys infrastructure including QM, Dialer, Genesys WFM, Agent Assist, text and speech analytics, and agent gamification. A Bachelor's Degree or equivalent in Computer Science, MIS, Engineering, or a directly related field is required; a Master's Degree is preferred. Ability to communicate effectively in both technical and business terms. Extensive hands-on experience with the Genesys Cloud platform. Ability to multitask, meet tight deadlines, and shift and adjust work effort based on critical priorities and SLAs. Experience with voice/chat bot technologies such as Genesys, Google Dialogflow, Lex, Nuance, etc. Generative AI experience is a plus. Key skills required for this role include Genesys PureCloud, PureCloud BYOC Cloud, Genesys tools, Google Dialogflow, voice bots, QM, Dialer, Genesys WFM, Agent Assist, text and speech analytics, agent gamification, and Generative AI. The location for this position is CHN. The minimum required experience is 6 years. The contact person for this role is RMG, and the email ID is RMG@SERVION.COM.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be joining TekWissen as an AI Specialist/Engineer based in Chennai. Your primary focus will be on building and deploying AI-powered Copilots, intelligent agents, and chatbots. Leveraging your expertise in AI techniques such as natural language processing (NLP), generative AI, and cognitive automation, you will develop solutions to enhance user experience, automate tasks, and provide intelligent assistance. Your responsibilities will include understanding business requirements, designing and implementing AI models, and deploying them, directly impacting our AI initiatives within a focused timeframe. Your key responsibilities will involve designing, developing, and deploying AI-powered Copilots and intelligent agents to automate tasks, provide proactive assistance, and enhance user productivity. Additionally, you will build and deploy chatbots using NLP techniques for conversational interfaces in various applications. You will be required to gather requirements and translate them into technical specifications for AI models and applications, develop and fine-tune AI algorithms and models, and integrate AI solutions with existing systems, deploying them in production environments. Furthermore, you will conduct experiments to evaluate AI model performance and optimize models for accuracy, efficiency, and scalability. To excel in this role, you should possess knowledge of developing and deploying AI solutions using NLP, generative AI, and cognitive automation. You must have proven experience in building and deploying chatbots and/or intelligent agents, proficiency in programming languages such as Python, R, or similar languages commonly used in AI development, exposure to AI frameworks and libraries like TensorFlow, PyTorch, and scikit-learn, and familiarity with cloud computing platforms such as AWS, Azure, and GCP for deploying AI models in production environments.
Ideally, you should have experience in chatbot development and Copilot development using Python, and hold a Bachelor's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. TekWissen Group is an equal opportunity employer that supports workforce diversity.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Python Full Stack Developer based in Bangalore with over 5 years of experience, you will use your expertise in Python, React, and SQL to contribute to the development of innovative solutions. While not mandatory, knowledge of Data Science would be beneficial, specifically classical regression, machine learning, deep learning, association rules, sequence analysis, cluster analysis, computer vision, and natural language processing. Your role will involve working with deep learning frameworks like PyTorch and TensorFlow, along with handling large-scale training and inference processes. You will also be expected to have familiarity with Generative AI and Large Language Models, including prompting and fine-tuning techniques. Understanding software engineering methodologies, design patterns, IT enterprise architectures, and cloud solutions will be essential for translating business requirements into mathematical models and data science objectives with measurable outcomes. Your strong analytical and problem-solving skills will be put to good use as you collaborate with cross-functional teams and present your findings effectively. Excellent interpersonal skills will be crucial for interacting with colleagues across geographical boundaries. This position requires candidates to be based in Bangalore. If you are passionate about leveraging your expertise in React.js, data engineering, and other relevant technologies to drive impactful business solutions, we look forward to having you on board.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

You are a talented B2B Copywriter and Storyteller with 2-3 years of experience, ideally in the IT SaaS sector or a marketing agency. Your role involves crafting compelling narratives and persuasive copy that effectively resonates with the target audience. Possessing certifications in ChatGPT or generative AI will be advantageous for this position. Your responsibilities include developing and implementing engaging B2B content strategies that are aligned with the brand voice and business objectives. You will be tasked with creating clear, persuasive copy for various marketing channels such as websites, emails, blogs, whitepapers, case studies, social media, videos, webinars, and cold emails. Collaboration with marketing, product, and sales teams is essential to produce content that drives lead generation, customer engagement, and brand awareness. Additionally, you will be translating complex technical concepts into impactful narratives, conducting thorough research on the industry, products, and target audience, and utilizing ChatGPT and generative AI tools to enhance content creation and foster innovation. Ensuring SEO optimization and adherence to best practices, as well as editing and proofreading content for quality and consistency, are crucial aspects of your role. Staying updated on industry trends, emerging technologies, and best practices in B2B marketing and storytelling is also expected. The ideal candidate for this position should have 2-3 years of experience as a B2B copywriter, preferably within the IT SaaS sector or a marketing agency. Strong storytelling and copywriting skills demonstrated through a portfolio of B2B content are essential. A solid understanding of B2B marketing principles and best practices, experience with SEO and content optimization techniques, familiarity with ChatGPT or other generative AI tools, and certifications in these areas are advantageous. 
Excellent research skills, the ability to grasp complex technical concepts quickly, attention to detail, a commitment to high-quality work, the capacity to work collaboratively in a fast-paced environment, and effective project management skills are key requirements. Strong communication skills and a proactive attitude are also necessary for success in this role. This is a remote position with working hours from 2:30 PM to 11:30 PM.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities: Understand requirements from the business and translate them into appropriate technical requirements. Create detailed business analyses outlining problems, opportunities, and solutions for the business. Perform activities related to data wrangling, model building, and model deployment. Stay current with the latest research and technology and communicate your knowledge throughout the enterprise. Lead initiatives to improve team morale, camaraderie, and collaboration. Technical Requirements: Generative AI, AI/ML, Python, Databricks. Preferred Skills: Technology->Machine Learning->Generative AI

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Hi Techies, we at Xoriant are hiring Senior and Lead Engineers. JD: Knowledge of Generative AI technologies. Experience integrating AI into work processes, decision-making, or problem-solving; this may include using AI-powered tools, Spring AI, and agentic systems, automating workflows, or analyzing AI-driven insights. 5+ years of development experience as a Backend Engineer (10+ years, with experience technically leading teams, for senior associates). Expertise in Java 11+, including a strong understanding of multithreading and memory management. Strong knowledge of data structures and algorithms. Expertise in RESTful APIs, microservices architecture, and cloud-native deployment. Good knowledge of databases (DynamoDB, MongoDB, Elasticsearch, MySQL, Oracle, or any cloud DB). Experience with CI/CD pipelines and DevOps practices. Familiarity with messaging systems (Kafka / Amazon SQS / JMS). Exposure to authentication and authorization using JWT or OAuth. Experience with ORM tools, preferably Hibernate/JPA. We are giving strong preference to early joiners, keeping in mind the urgency of the position. If you are interested, please share the details below: total experience, experience in core and advanced Java, hands-on experience in Generative AI, notice period, current CTC, expected CTC.

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Bengaluru

Remote

We're hiring a passionate Data Scientist / GenAI Engineer to join our AI-first team working on LLMs, RAG pipelines, NLP features, and GenAI use cases like chatbots, recommendation engines, and smart automation.

Posted 1 month ago

Apply

6.0 - 10.0 years

18 - 22 Lacs

Ahmedabad

Work from Office

Job purpose: Design & implement the best-engineered technical solutions using the latest technologies and tools. Who You Are: Lead Python development efforts for GenAI and data engineering projects from architecture to deployment. Build and optimize LLM-powered pipelines, APIs, and services using OpenAI, Hugging Face, or similar frameworks. Architect and implement scalable data pipelines, transformation logic, and feature engineering for AI models. Collaborate with ML/AI scientists, data engineers, and product teams to deliver end-to-end GenAI solutions. Drive the integration of LLMs into real-time applications (e.g., chatbots, summarizers, copilots). Design and maintain efficient, reusable, and reliable Python code using modern frameworks (FastAPI, Django). Ensure high code quality through unit testing, code reviews, and adherence to best practices. Optimize performance, cost, and scalability of AI-powered backend systems in cloud-native environments. Provide technical guidance and mentorship to junior engineers. What will excite us: 5+ years of professional experience in Python backend development. 2+ years of experience working with GenAI / LLMs (OpenAI, Hugging Face, LangChain, etc.). Hands-on experience with FastAPI, Django, or Flask for scalable API development. Strong experience in data engineering: ETL pipelines, data wrangling, feature engineering, and processing large datasets. Experience working with Vector Databases (e.g., Pinecone, Weaviate, FAISS) and embedding models. Solid grasp of asynchronous programming, RESTful API design, and microservices. Proficiency with SQL/NoSQL databases, data lakes, and cloud platforms (AWS/GCP/Azure). Familiarity with CI/CD pipelines, Docker, and container orchestration (Kubernetes is a plus). Excellent communication and leadership skills with experience mentoring teams or leading projects. What will excite you: Experience with LangChain, LLMOps, or RAG (Retrieval-Augmented Generation) systems. 
Exposure to MLOps tools like MLflow, Kubeflow, or Vertex AI. Knowledge of data governance, privacy, and compliance in AI applications. Contributions to open-source projects in Python, AI/ML, or data engineering domains. Work location: Ahmedabad (Work from Office)
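The data-engineering side of this role (ETL pipelines, data wrangling, feature engineering) often comes down to small, testable transforms. A minimal sketch of one such step, min-max feature scaling, in plain Python; the column values are invented for illustration:

```python
def min_max_scale(values):
    # Rescale a numeric column to the [0, 1] range, a common
    # feature-engineering step before model training. A constant
    # column maps to all zeros to avoid division by zero.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

ages = [18, 30, 42, 66]  # hypothetical raw feature column
scaled = min_max_scale(ages)  # [0.0, 0.25, 0.5, 1.0]
```

Real pipelines would typically reach for pandas or scikit-learn's `MinMaxScaler`, but the arithmetic is the same.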

Posted 1 month ago

Apply

5.0 - 7.0 years

11 - 19 Lacs

Pune

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! REQUIREMENTS: Total experience 5+ years. Deep understanding of Generative AI fundamentals and transformer-based architectures. Strong experience in cloud architecture (e.g., AWS, Azure, GCP) for deploying scalable AI systems. Hands-on experience working with Generative AI models. Strong working experience in Azure AI. Proven experience with BERT, GPT, LLaMA, and similar LLMs. Strong hands-on experience in prompt engineering and RAG techniques. Experience in fine-tuning and deploying models using frameworks like Hugging Face Transformers, LangChain, or equivalent. Familiarity with multi-agent AI systems and collaborative model workflows. Proficient in Python and machine learning libraries (e.g., PyTorch, TensorFlow). Experience integrating models into enterprise platforms and APIs. Understanding of MLOps practices and CI/CD pipelines for AI deployment. Background in Natural Language Processing (NLP) and Knowledge Engineering. RESPONSIBILITIES: Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decisions taken. Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Posted 1 month ago

Apply

7.0 - 10.0 years

15 - 25 Lacs

Pune

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! REQUIREMENTS: Total experience 7+ years. Deep understanding of Generative AI fundamentals and transformer-based architectures. Strong experience in cloud architecture (e.g., AWS, Azure, GCP) for deploying scalable AI systems. Hands-on experience working with Generative AI models. Strong working experience in Azure AI. Proven experience with BERT, GPT, LLaMA, and similar LLMs. Strong hands-on experience in prompt engineering and RAG techniques. Experience in fine-tuning and deploying models using frameworks like Hugging Face Transformers, LangChain, or equivalent. Familiarity with multi-agent AI systems and collaborative model workflows. Proficient in Python and machine learning libraries (e.g., PyTorch, TensorFlow). Experience integrating models into enterprise platforms and APIs. Understanding of MLOps practices and CI/CD pipelines for AI deployment. Background in Natural Language Processing (NLP) and Knowledge Engineering. RESPONSIBILITIES: Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decisions taken. Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Posted 1 month ago

Apply

5.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Hybrid

Job Summary: The Model Validation team is responsible for model risk management for Bread Financial models and bank models across the entire organization. Members of this team work with teams across the business (marketing, pricing, finance, etc.) that build the models to understand and validate the methodology. The Model Validation Analyst, Senior is a key member of the Model Risk Management (MRM) team. Key responsibilities include executing against the overall model risk management framework and assisting in the independent validation of models used by the business. Essential Job Functions: Process and Project Management: Conducts model validations independently with minimal supervision. Acts as a mentor and coaches team members if needed. Maintains a basic functional knowledge of model risk concepts and works with the business units to communicate key guiding principles. Performs end-to-end validations/testing of basic or less complex models/tools. Internal- and client-facing with oversight, guidance, and review from management. Validation/testing involves identifying the key risks associated with a model and planning a risk-based validation approach and scope. Designs and conducts validation tests, recognizes gaps in model risk management and governance, and drafts the model validation report. Quality Assurance: Checks the model's accuracy and demonstrates that the model is robust and stable. Assesses potential limitations and evaluates the model's behavior over a range of input values. Assesses the impact of assumptions and identifies situations where the model performs poorly or becomes unreliable. Evaluates formal results of the analysis performed and drafts formal model validation documentation and a proposed risk rating assessment. Business Communications and Relationships: Demonstrates professional and proficient verbal and written communication skills when working with internal and external partners.
Builds relationships by establishing trust, confidence, and credibility with senior leaders, executives, and internal partners. Is proactive, demonstrates a strong work ethic, and shows initiative to build the business in a results-driven environment. Utilizes critical thinking skills to help analyze business issues, collaborates with stakeholders to resolve problems, is strategic and flexible when needed, and gains consensus on the best solution. Working Conditions / Physical Requirements: Normal office environment; some travel may be required. Minimum Qualifications: Bachelor's degree in Engineering, Statistics, Mathematics, Economics, or another related quantitative discipline. Five or more years' experience in data analytics or model development/validation is required. Preferred Qualifications: Master's degree in Engineering, Statistics, Mathematics, Economics, or another related quantitative discipline. Six or more years' experience in the financial services industry, as well as experience in risk management. Other Duties: This job description is illustrative of the types of duties typically performed by this job. It is not intended to be an exhaustive listing of each and every essential function of the job. Because job content may change from time to time, the Company reserves the right to add and/or delete essential functions from this job at any time.

Posted 1 month ago

Apply

10.0 - 20.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Job Title: Senior Developer - Artificial Intelligence & Machine Learning. Location: Chennai / Bangalore / Pune (hybrid model, 2 days working from the office). Role Summary: We are seeking a skilled and hands-on Senior Artificial Intelligence & Machine Learning (AI/ML) Engineer to build and productionize AI solutions, including fine-tuning large language models (LLMs), implementing Retrieval-Augmented Generation (RAG) workflows, multi-agent applications, and MLOps pipelines. This role focuses on individual technical contribution and requires close collaboration with solution architects, AI/ML leads, and fellow engineers to translate business use cases into scalable, secure, cloud-native AI services. The ideal candidate will bring deep technical expertise across the AI/ML lifecycle, from prototyping to deployment, while contributing to a culture of engineering excellence through peer reviews, documentation, and platform innovation. They will play a critical role in delivering robust, high-performance AI systems in partnership with the broader AI/ML team. Key Responsibilities: Model Development & Optimization: Fine-tune foundation models (e.g., GPT-4, Llama 3). Implement prompt engineering and basic parameter-efficient tuning (e.g., LoRA). Conduct model evaluation for quality, bias, and hallucination; analyze results and suggest improvements. RAG & Agentic Systems (exposure, not ownership): Assist in building RAG pipelines: participate in integrating embedding generation, vector stores (e.g., FAISS, pgvector), and retrieval/ranking components. Work with multi-agent frameworks (e.g., LangChain, Crew AI). Production Engineering / MLOps: Contribute to CI/CD pipelines for model training and deployment (e.g., GitHub Actions, SageMaker Pipelines). Help automate monitoring for latency, drift, and cost; assist in lineage tracking (e.g., MLflow).
Containerize services with Docker and assist in orchestration (e.g., Kubernetes/EKS/GKE). Data & Feature Engineering: Build and maintain data pipelines for collection, cleansing, and feature generation (e.g., Airflow, Spark). Implement basic data versioning and assist with synthetic data generation as needed. Code Quality & Collaboration: Participate in design and code reviews. Contribute to testing (unit, integration, guardrail/hallucination tests) and documentation. Share knowledge through sample notebooks and internal sessions. Security, Compliance, Performance: Follow secure coding and Responsible AI guidelines. Assist in optimizing inference throughput and cost (e.g., quantization, batching) under guidance. Ensure SLAs are met and contribute to system auditability. Technology Stack: Programming Languages & Frameworks: Python (expert); JavaScript/Go/TypeScript (nice to have). Strong knowledge of libraries such as scikit-learn, Pandas, NumPy, XGBoost, LightGBM, TensorFlow, and PyTorch. PyTorch, TensorFlow/Keras, Hugging Face Transformers/PEFT, LangChain/LlamaIndex, Ray/PyTorch Lightning, FastAPI/Flask. Experience working with RESTful APIs, authentication (OAuth, API keys), and pagination. Cloud & DevOps: Expertise in one or more cloud vendors such as AWS, GCP, or Azure. Containers (Docker), orchestration (Kubernetes, EKS/GKE/AKS), MLOps. Databases: Relational: PostgreSQL, MySQL. NoSQL: MongoDB / DynamoDB. Vector stores: FAISS / pgvector / Pinecone / OpenSearch / Milvus / Weaviate. RAG Components: Document loaders/parsers, text splitters (recursive/semantic), embeddings (OpenAI, Cohere, Vertex AI), hybrid/BM25 retrievers, rerankers (Cross-Encoder). Multi-Agent Frameworks: Crew AI / AutoGen / LangGraph / MetaGPT / Haystack Agents, planning and tool-use patterns. Testing & Quality: Unit/integration testing (pytest), guardrails. Qualifications: 10-12 years of experience in software engineering/data science, with 4+ years leading AI/ML projects end-to-end.
Bachelor's or Master's in Computer Science, Artificial Intelligence, Data Science, or a related field. Certifications preferred: AWS Certified Machine Learning, Google Professional Machine Learning Engineer, or Azure AI Engineer Associate, and Kubernetes CKA/CKAD. Experience in regulated industries (fintech, healthcare, eCommerce) is a plus. Soft Skills & Leadership Attributes: Influential communication: translate complex ML concepts for non-technical stakeholders; strong presentation and storytelling. Mentorship mindset: coach, upskill, and inspire cross-functional teams; foster psychological safety. Ownership & bias for action: deliver POCs, iterate based on feedback, drive solutions to production. Critical thinking & experimentation: data-driven decision making, hypothesis testing, A/B experimentation. Adaptability: stay current with rapid advances in GenAI, tooling, and research; evaluate emerging models/services.
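The retrieval/ranking component of the RAG pipelines described above can be sketched with plain cosine similarity over embedding vectors. The tiny 3-dimensional vectors below are hypothetical stand-ins for real embedding-model output; a production system would use a vector store such as FAISS or pgvector rather than a linear scan:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_vec, doc_vecs, top_k=2):
    # Score every document against the query and return the best
    # (doc_index, score) pairs, highest similarity first.
    scored = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Hypothetical document embeddings and a query embedding.
doc_vecs = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.9], [0.8, 0.1, 0.0]]
query = [1.0, 0.0, 0.0]
top = rank(query, doc_vecs)  # documents 0 and 2 score highest
```

The top-ranked documents would then be passed as context to the generation step; rerankers such as cross-encoders refine exactly this ordering.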

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Chennai

Work from Office

Sr. AI/ML Engineer. Years of experience: Lead Material - 5-10 years (with a minimum of 5 years of relevant experience). Key skills: Python, TensorFlow, Generative AI, Machine Learning, AWS, Agentic AI, OpenAI, Claude, FastAPI. Job description: Lead the development and implementation of GenAI solutions leveraging LLMs (e.g., GPT, Claude, LLaMA) and multimodal models. Identify and prioritize use cases across functions such as drug discovery, clinical trial design, medical writing, patient engagement, and knowledge management. Guide the evaluation and fine-tuning of foundation models using Takeda's proprietary and third-party data, ensuring compliance with GxP, HIPAA, and AI governance. Architect scalable GenAI pipelines using cloud-native tools and frameworks (e.g., Azure OpenAI, AWS Bedrock, LangChain, Hugging Face, NVIDIA NeMo). Collaborate with data scientists, engineers, compliance, legal, and business leaders to define AI standards, ethics, and responsible AI principles. Lead a cross-functional GenAI team, driving innovation while ensuring security, transparency, and performance of deployed models. Stay current on GenAI trends, open-source ecosystems, and regulatory impacts (e.g., EU AI Act, FDA guidelines). Location: World Trade Center, 1st floor of Tower B, 5/142, Rajiv Gandhi Salai, Perungudi, Chennai - 600096. Walk-in date: 26th July 2025. Time: 11:00 AM to 3:00 PM. Contact recruiter: Anishma Alexen

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Noida

Hybrid

Position Title: Specialist - Data Science. Business Title: Engineer II - Data Science. We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. Key Responsibilities: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications: Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred Skills: Experience with machine learning techniques such as regression, classification, predictive modeling, clustering, computer vision (YOLO), the deep learning stack, and NLP using Python. Strong knowledge and experience in Generative AI / LLM-based development. Strong experience working with key LLM model APIs (e.g., AWS Bedrock or Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex, or RAG). Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure. Expertise in building enterprise-grade, secure data ingestion pipelines (ETL, Glue jobs, QuickSight) for unstructured data, including indexing, search, and advanced retrieval patterns. Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.), and expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. Proficiency in Python, TypeScript, Node.js, ReactJS (or equivalent) and associated frameworks and libraries (e.g., pandas, NumPy, scikit-learn), plus Glue crawlers and ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Good-to-have skills: Knowledge of and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios. ** Titles/designations of roles are as per global team job profiling. Local titles are adjusted for the understanding of regional candidates applying for jobs in India. Designations depend on years of relevant work experience and performance during interviews.
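The text-chunking skill called out above can be illustrated with a minimal fixed-size character splitter with overlap, a simplification of the recursive/semantic splitters in libraries like LangChain (the sizes here are arbitrary; real pipelines usually split on token or sentence boundaries):

```python
def chunk_text(text, size=100, overlap=20):
    # Split `text` into windows of `size` characters, each sharing
    # `overlap` characters with the previous chunk, so content cut
    # at a boundary still appears whole in the next chunk.
    if size <= overlap:
        raise ValueError("size must be larger than overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 250, size=100, overlap=20)  # 3 overlapping chunks
```

The overlap preserves context across chunk boundaries, which matters when each chunk is later embedded and retrieved independently.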

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Pune

Hybrid

Location: Baner, Pune. Hybrid mode - 3 days WFO. Payments domain is mandatory. Immediate to 30 days' notice period. JD: TP3 Full-Stack Developer (Python + Angular), 7 years' experience. Key Responsibilities: Full-Stack Application Development: Design, develop, and maintain scalable web applications using Angular (v12+) for the frontend and Python-based backends. Ensure responsive, secure, and performant user interfaces. API & Microservices Architecture: Develop RESTful APIs and backend services using Python (FastAPI, Flask, or Django). Structure and manage microservices for modular and maintainable systems. Frontend Development (Angular): Implement dynamic user interfaces with Angular, focusing on component-based architecture, reactive forms, routing, and state management (NgRx or RxJS). Integrate with backend services via REST APIs. Database Integration: Work with NoSQL databases to manage structured and unstructured data efficiently. CI/CD & DevOps: Contribute to CI/CD pipelines and containerized deployments using Docker, GitHub Actions, GitLab CI/CD, or similar tools. Work in agile DevOps environments for smooth releases. Cross-Functional Collaboration: Work closely with UX designers, backend engineers, data teams, and occasionally AI/GenAI developers to deliver integrated and feature-rich applications. Performance Optimization: Optimize frontend rendering and backend response times. Monitor and improve overall application performance. Required Qualifications: Frontend Skills: 3-5 years' experience with Angular (v10 or newer), experience with reactive forms, and Angular best practices. Backend Skills: 3-5 years' experience with Python (FastAPI, Flask, or Django preferred). Experience in building RESTful APIs, managing authentication/authorization, and handling business logic. Database & Data Handling: Solid experience with PostgreSQL and at least one NoSQL database. Familiarity with schema design, indexing, and performance tuning.
DevOps & Tooling: Experience with Docker, Git-based workflows, and setting up CI/CD pipelines. Familiarity with cloud platforms like AWS, GCP, or Azure is a plus. General: 5-7 years of total professional experience in software development. Strong debugging, analytical, and problem-solving skills. Ability to work in Agile environments and collaborate with cross-functional teams. Regards: Soniya S

Posted 1 month ago

Apply