
207 Vertex AI Jobs - Page 4

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

6.0 - 8.0 years

20 - 27 Lacs

Indore, Pune

Work from Office

Design, develop, and deploy ML models for real-world applications. Implement and optimize ML pipelines using PySpark and MLflow. Process structured and unstructured data using Pandas and NumPy. Train models using scikit-learn, TensorFlow, and PyTorch. Integrate LLMs. Required candidate profile: strong in Python, ML algorithms, model development, and evaluation techniques; experience with PySpark, model lifecycle management, MLflow, Pandas, NumPy, scikit-learn, and LLMs (OpenAI, Hugging Face Transformers).
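
For readers unfamiliar with the stack this posting describes, here is a minimal, illustrative sketch of training a scikit-learn model and logging it with MLflow for lifecycle management; the dataset, parameters, and run name are placeholders, not details from the listing.

```python
# Illustrative only: scikit-learn training with MLflow experiment tracking.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the structured data the posting mentions.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for lifecycle management
```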

Posted 3 weeks ago

Apply

11.0 - 14.0 years

35 - 45 Lacs

Bengaluru

Hybrid

Job Title: Senior Machine Learning Engineer - Anomaly Detection & Time Series Forecasting. Location: Bangalore. Department: Engineering – AI/ML. Job Overview: Resolve Tech Solutions is seeking an experienced Senior Machine Learning Engineer to drive the development of AI-driven anomaly detection, time series forecasting, and predictive analytics models for our next-generation observability platform. This role will focus on designing, building, and deploying ML models that provide real-time insights, predictive alerts, and intelligent recommendations powered by LLMs. As a key individual contributor, you will work on solving complex challenges in large-scale cloud environments while collaborating with cross-functional teams. Key Responsibilities: Develop Advanced ML Models: Design and implement machine learning models for anomaly detection, time series forecasting, and predictive analytics, ensuring high accuracy and scalability. Anomaly Detection & Root Cause Analysis: Build robust models to detect abnormal patterns in metric data, leveraging statistical methods, deep learning, and AI-driven techniques. Time Series Forecasting: Implement predictive models to forecast metric trends, proactively identifying threshold breaches and alerting users. LLM-Driven Insights: Utilize Large Language Models (LLMs) to analyze historical incidents, correlate anomalies, and provide recommendations by integrating with ITSM platforms like ServiceNow. Cloud & Big Data Integration: Work with large-scale data pipelines, integrating ML models with cloud platforms such as AWS, Azure, and GCP. Feature Engineering & Data Processing: Design and optimize feature extraction and data preprocessing pipelines for real-time and batch processing. Model Deployment & Optimization: Deploy ML models in production environments using MLOps best practices, ensuring efficiency, scalability, and reliability. Performance Monitoring & Continuous Improvement: Establish key performance metrics, monitor model drift, and implement retraining mechanisms for continuous model improvement. Collaboration & Knowledge Sharing: Work closely with product managers, data engineers, and DevOps teams to align ML solutions with business objectives and platform goals. Requirements: Experience: 5+ years of hands-on experience in machine learning, with a strong focus on anomaly detection, time series forecasting, and deep learning. ML & AI Expertise: Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, Scikit-Learn, and XGBoost. Anomaly Detection: Experience with statistical techniques, autoencoders, GANs, or isolation forests for anomaly detection in time series data. Time Series Forecasting: Strong background in models such as ARIMA, Prophet, LSTMs, or Transformers for predictive analytics. LLMs & NLP: Hands-on experience in fine-tuning and integrating LLMs for intelligent insights and automated issue resolution. Cloud & Data Engineering: Familiarity with cloud ML services (AWS SageMaker, Azure ML, GCP Vertex AI) and distributed computing frameworks like Spark. MLOps & Deployment: Experience with CI/CD pipelines, Docker, Kubernetes, and model monitoring in production. Problem-Solving & Analytical Skills: Ability to analyze large datasets, derive insights, and build scalable ML solutions for enterprise applications. Communication & Collaboration: Strong verbal and written communication skills, with the ability to explain ML concepts to non-technical stakeholders.
Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field. Preferred Qualifications: Experience with AIOps, observability, or IT operations analytics. Hands-on experience with reinforcement learning or graph neural networks. Familiarity with Apache Kafka, Flink, or other real-time data processing frameworks. Contributions to open-source ML projects or research publications in anomaly detection and time series analysis. Why Join Us? Opportunity to work on cutting-edge AI/ML solutions for enterprise observability. Collaborative and innovative work environment with top AI/ML talent. Competitive salary, benefits, and career growth opportunities. Exposure to large-scale, cloud-native, and AI-driven technologies. If you are passionate about AI-driven anomaly detection, time series forecasting, and leveraging LLMs for real-world enterprise solutions, we'd love to hear from you!
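
As an illustration of one technique this posting names, here is a minimal isolation-forest sketch for flagging anomalies in a metric time series; the synthetic data, rolling-window size, and contamination rate are demonstration assumptions, not part of the listing.

```python
# Illustrative only: isolation-forest anomaly scoring over a univariate metric series.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
ts = pd.Series(rng.normal(100, 5, 1_440))           # synthetic per-minute metric
ts.iloc[700:710] += 60                               # injected anomaly burst

# Simple rolling-window features; real pipelines would add seasonality, lags, etc.
feats = pd.DataFrame({
    "value": ts,
    "rolling_mean": ts.rolling(30, min_periods=1).mean(),
    "rolling_std": ts.rolling(30, min_periods=1).std().fillna(0.0),
})

clf = IsolationForest(contamination=0.01, random_state=0).fit(feats)
feats["anomaly"] = clf.predict(feats) == -1          # True where the model flags an outlier
print(feats.index[feats["anomaly"]].tolist()[:10])
```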

Posted 3 weeks ago

Apply

2.0 - 5.0 years

3 - 7 Lacs

Chennai

Work from Office

MaintWiz is building an AI-first CMMS, leveraging our rich maintenance, equipment, and operational datasets to deliver predictive, prescriptive, and automated intelligence. As a Machine Learning Engineer focused on CMMS data, you will design, build, and deploy ML models that directly improve asset reliability, reduce downtime, optimize spare parts inventory, and enhance maintenance decision-making. Technical Skills: 1) Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn). 2) Experience with time-series forecasting, anomaly detection, and NLP. 3) Proficiency in Python and SQL; familiarity with Spark or distributed data processing. 4) Knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, or similar). 5) Deep domain knowledge. 6) Experience working with asset management, manufacturing, IoT, or CMMS-related datasets is highly preferred. 7) Understanding of maintenance KPIs, asset hierarchies, and work order processes. 8) Experience deploying ML models on CMMS / EAM / manufacturing data. 9) Exposure to IoT device data, sensor fusion, and predictive maintenance platforms. Degree in Computer Science Engineering or a relevant discipline; an MBA degree is an advantage. 2-5 years of experience in building ML models related to Plant Maintenance, Asset Management, O&M, Reliability, and Condition Monitoring.
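
To illustrate the time-series forecasting work described above, here is a small ARIMA baseline for weekly work-order demand; the synthetic series and the ARIMA order are illustrative assumptions, not MaintWiz specifics.

```python
# Illustrative only: forecasting work-order demand with a simple ARIMA baseline.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic weekly work-order counts standing in for CMMS history.
idx = pd.date_range("2023-01-01", periods=104, freq="W")
orders = pd.Series(
    50 + 10 * np.sin(np.arange(104) / 8) + np.random.default_rng(1).normal(0, 3, 104),
    index=idx,
)

model = ARIMA(orders, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=8)                   # next eight weeks of expected demand
print(forecast.round(1))
```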

Posted 3 weeks ago

Apply

6.0 - 10.0 years

50 - 85 Lacs

Noida

Work from Office

As a Senior Data Scientist AI, you will lead the design and development of machine learning and generative AI models that enhance our platform. You will work closely with engineers, product managers, and cybersecurity SMEs to innovate how we analyze architectures, assess risks, and automate threat intelligence. This is a hands-on, highly strategic role that requires both technical depth and product insight. Key Responsibilities Design and implement ML and NLP models for use in analysis, pattern recognition, and AI-powered automation. Build and evaluate generative AI models (e.g., LLMs or RAG pipelines) to automate threat model generation and risk recommendations. Collaborate with product and engineering to align AI solutions with customer pain points and business goals. Conduct data discovery, wrangle large security and architecture datasets, and ensure data quality for training and inference. Lead experiments, performance tuning, and model deployment using MLOps best practices. Develop reusable components and contribute to internal AI frameworks and libraries. Interpret and communicate complex data science results to non-technical stakeholders. Mentor junior data scientists and help define the future AI roadmap at ThreatModeler. Required Qualifications 5+ years of industry experience in data science, machine learning, or AI development, preferably in enterprise software or cybersecurity. Advanced degree (MS or PhD) in Computer Science, Data Science, Statistics, or a related field. Strong experience in NLP, LLMs (e.g., OpenAI, Cohere, Anthropic), and transformer architectures. Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face). Deep understanding of model evaluation, bias/fairness, data preprocessing, and statistical analysis. Familiarity with cloud environments (AWS/GCP/Azure) and containerized model deployment (Docker, Kubernetes). Experience working with graph-based models, threat intelligence, or security ontologies is a strong plus. Preferred Qualifications Experience with MLOps tools (MLflow, Vertex AI, SageMaker). Understanding of threat modeling methodologies Background in building SaaS AI solutions for large-scale enterprise clients.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

16 - 27 Lacs

Bengaluru

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! REQUIREMENTS: Total experience of 5+ years. Strong proficiency in Python for data science, with experience in libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch. Hands-on working experience in Python. Experience with data visualization libraries (e.g., matplotlib, seaborn, plotly). Hands-on working experience with AWS SageMaker. Basic to intermediate SQL skills. Experience working with GCP. Hands-on experience deploying ML solutions using Kubeflow, Vertex AI, Airflow, or PySpark. Excellent communication and collaboration skills for working across global teams. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code reviews through exhaustive, systematic root cause analysis, and being able to justify the decisions taken. Carrying out POCs to make sure that suggested designs/technologies meet the requirements.
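
As a hedged sketch of the Vertex AI deployment experience this posting asks for, the snippet below uploads and deploys a model with the google-cloud-aiplatform SDK; the project, region, bucket path, and serving container image are assumed placeholders.

```python
# Illustrative only (hypothetical project/bucket names): registering and deploying a model on Vertex AI.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")    # assumed project/region

# Upload a model artifact produced by an earlier training step.
model = aiplatform.Model.upload(
    display_name="demand-forecaster",
    artifact_uri="gs://my-bucket/models/demand-forecaster/",          # assumed GCS path
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest",  # assumed image
)

# Deploy to a managed endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.resource_name)
```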

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requirements Key Responsibilities Governance and harmonisation Define and establish best lean agile practices driving value addition , team efficiency, collaboration and visibility across teams and product Establish common tools and usage to drive the software and data release process across multiple teams from backlog management to value based prioritization to sprint execution to deployments to release closure and documentation generation. Drive scrum of scrum for overall SaaS and data platform. Agile Delivery Leadership Champion Scrum, Kanban, and SAFe principles while adapting them to suit product maturity and regulatory requirements. Facilitate sprint planning, backlog refinement, daily stand-ups, sprint reviews, and retrospectives with technical and business stakeholders. Drive incremental delivery and ensure teams deliver working, high-quality software on predictable timelines. Remove delivery blockers proactively, from technical dependencies to cross-team alignment issues. Technical Engagement Understand system architecture, AI model lifecycle, MLOps pipelines, and data flows to better facilitate technical discussions. Collaborate with Tech Leads, Data Scientists, ML Engineers, and DevOps teams to ensure sprint commitments are technically achievable. Track code quality, automated test coverage, CI/CD health, and cloud infrastructure readiness as part of delivery metrics. Ensure data privacy, security, and regulatory compliance are integrated into delivery workflows. Technical Stakeholder & Product Alignment Partner with Product Owners, Clinical SMEs, Data Science, Data Engineering and Customer Success to ensure backlog priorities align with business outcomes and value driven Balance innovation speed with healthcare/pharma compliance requirements (HIPAA, GDPR, FDA, GxP). Ensure transparent and regular communication of progress, risks, and dependencies to leadership and stakeholders. Metrics and Continuous Improvement Establish and monitor KPIs such as sprint predictability, sprint metrics such as burn rate, lead time, defect leakage, and deployment frequency. Drive retrospective outcomes into actionable improvements for team efficiency and product quality. Introduce process automation, backlog grooming discipline, and release readiness checklists to optimize delivery Work Experience Required Qualifications 8+ years in software delivery roles, with 5+ years as a Scrum Master or Agile Delivery Lead. Extensive admin experience in setting up and using JIRA tool to drive the software and data release process across multiple teams from backlog management to value based prioritization to sprint execution to deployments to release closure and documentation generation Built or scaled scrum practices from grounds up with successful adoption across multiple product/scrum teams Proven track record in SaaS product delivery, preferably with AI/ML-powered platforms. Strong technical foundation in cloud-native architectures (AWS/Azure/GCP), APIs, microservices, and data engineering workflows. Excellent servant-leadership, facilitation, and conflict resolution skills Proactive thinker with ability to influence stakeholder based on business objectives and value delivery Strong analytical ability to interpret technical and business metrics for decision-making. Familiarity with generative AI, NLP, and predictive analytics in healthcare contexts. 
Familiarity with ML model development, MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI), and data governance. Experience in healthcare and/or pharma software, with a strong understanding of EHR/EMR health records and HIPAA, GDPR, GxP, and FDA 21 CFR Part 11 compliance. Understanding of clinical trial systems, RWE (Real World Evidence) platforms, or drug discovery pipelines.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Lead Software Engineer within the Enterprise Application & Cloud Transformation team. In this role, you will: Lead complex technology Cloud initiatives including those that are companywide with broad impact. Act as a key contributor in automating the provisioning of Cloud Infrastructure using Infrastructure as a Code. Make decisions in developing standards and companywide best practices for engineering and large-scale technology solutions. Design, Optimization and Documentation of the Engineering aspects of the Cloud platform. Understanding of industry best practices and new technologies, influencing and leading technology team to meet deliverables and drive new initiatives. Review and analyze complex, large-scale technology solutions in Cloud for strategic business objectives and solving technical challenges that require in-depth evaluation of multiple parameters, including intangibles or unprecedented technical factors. Collaborate and consult with key technical experts, senior technology team, and external industry groups to resolve complex technical issues and achieve goals. Build and Enable cloud infrastructure, automate the orchestration of the entire GCP Cloud Platforms for Wells Fargo Enterprise. Working in a globally distributed team to provide innovative and robust Cloud centric solutions. Closely working with Product Team and Vendors to develop and deploy Cloud services to meet customer expectations. Required Qualifications: 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education 3+ years working with GCP and a proven track record of building complex infrastructure programmatically with IaC tools. Must have 2+ years of hands-on experience with Infrastructure as Code tool Terraform and GitHub. Must have professional cloud certification on GCP. Proficient on container-based solution services, have handled at least 2-3 large scale Kubernetes based Infrastructure build out, provisioning of services GCP. Exposure to services like GKE, Cloud functions, Cloud Run, Cloud Build, Artifactory etc. Infrastructure and automation technologies: Orchestration, Harness, Terraform, Service Mesh, Kubernetes, API development, Test Driven Development Sound knowledge on the following areas with an expertise on one of them - 1. Proficient and have a thorough understanding of Cloud service offerings on Storage and Database. 2. Should have good understanding of networking, firewalls, load balancing concepts (IP, DNS, Guardrails, Vnets) and exposure to cloud security, AD, authentication methods, RBAC. 3. Proficient and have a thorough understanding of Cloud service offerings on Data, Analytics, AI/ML. Exposure to Analytics AIML services like BigQuery, Vertex AI, Data Proc etc. 4. Proficient and have a thorough understanding of Cloud service offerings on Security, Data Protection and Security policy implementations. Thorough understanding of landing zone and networking, Security best practices, Monitoring and logging, Risk and controls. Should have good understanding on Control Plane, Azure Arc and Google Anthos. Experience working in Agile environment and product backlog grooming against ongoing engineering work Enterprise Change Management and change control, experience working within procedural and process driven environment Desired Qualifications: Should have exposure to Cloud governance and logging/monitoring tools. 
Experience with Agile, CI/CD, DevOps concepts and SRE principles. Experience in scripting (Shell, Python, Go) Excellent verbal, written, and interpersonal communication skills. Ability to articulate technical solutions to both technical and business audiences Ability to deliver & engage with partners effectively in a multi-cultural environment by demonstrating co-ownership & accountability in a matrix structure. Delivery focus and willingness to work in a fast-paced, enterprise environment.

Posted 4 weeks ago

Apply

7.0 - 10.0 years

22 - 37 Lacs

Bengaluru

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! REQUIREMENTS: Total experience of 7+ years. Hands-on working experience in Python. Experience with data visualization libraries (e.g., matplotlib, seaborn, plotly). Strong working grasp of DS stack packages: SciPy, scikit-learn, TensorFlow, PyTorch, NumPy, Pandas. Hands-on working experience with AWS SageMaker. Basic to intermediate SQL skills. Experience working with GCP. Hands-on experience deploying ML solutions using Kubeflow, Vertex AI, Airflow, or PySpark. Excellent communication and collaboration skills for working across global teams. RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code reviews through exhaustive, systematic root cause analysis, and being able to justify the decisions taken. Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Work from Office

- Hands on working experience in Python. - Experience with data visualization libraries. - Strong working experience grasp of DS stack packages. - Basic to intermediate SQL. - Experience working with GCP. - Hands-on experience deploying ML solutions.

Posted 4 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As an AI Infrastructure Engineer (DevOps/MLOps) at Techvantage.ai, you will play a crucial role in building and managing the cloud infrastructure, deployment pipelines, and machine learning operations that power our AI-driven products. Your responsibilities will span across software engineering, machine learning, and cloud architecture, ensuring that our systems are scalable, reliable, and ready for production. You will design and manage CI/CD pipelines for software applications and machine learning workflows, deploying and monitoring ML models in production using tools like MLflow, SageMaker, or Vertex AI. Automation of infrastructure provisioning and configuration through IaC tools such as Terraform or Pulumi will be a key aspect of your role. Building robust monitoring, logging, and alerting systems for AI applications, alongside managing containerized services with Docker and Kubernetes, will be essential for maintaining system reliability and scalability. Collaboration with data scientists and ML engineers to streamline model experimentation, versioning, and deployment will be a part of your daily tasks. Optimizing compute resources and storage costs across cloud environments like AWS, GCP, or Azure, and ensuring system security will also fall under your purview. To excel in this role, you should have at least 5 years of experience in DevOps, MLOps, or infrastructure engineering roles, with hands-on experience in cloud platforms and services related to ML workloads. Proficiency in CI/CD tools, Docker, Kubernetes, and infrastructure-as-code frameworks is essential. Scripting skills in Python, Bash, or similar languages for automation tasks, along with familiarity with monitoring/logging tools, are highly valued. Preferred qualifications include experience with Kubeflow, MLflow, DVC, or Triton Inference Server, as well as exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus, and a background in software engineering, data engineering, or ML research is considered a bonus. By joining Techvantage.ai, you will have the opportunity to work on cutting-edge AI platforms and infrastructure, collaborate with top ML, research, and product teams, and receive a competitive compensation package with no constraints for the right candidate.,
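
To illustrate the MLflow-based deployment workflow this posting mentions, here is a minimal sketch of promoting a registered model and loading it for inference; the tracking URI and model name are assumptions, and the stage-based registry API shown is one of several that MLflow supports.

```python
# Illustrative only: promoting and loading a registered model from the MLflow model registry.
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://mlflow.internal:5000")   # assumed tracking-server URL

client = MlflowClient()
# Assume an earlier run has already logged and registered a model named "churn-model".
version = client.get_latest_versions("churn-model", stages=["None"])[0].version
client.transition_model_version_stage(name="churn-model", version=version, stage="Production")

# Downstream services load the production stage rather than a hard-coded run ID.
model = mlflow.pyfunc.load_model("models:/churn-model/Production")
```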

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an AI developer specializing in LLMs, multi-agent AI systems, and AI reasoning frameworks, you will be expected to have hands-on experience with LangChain, LangGraph, and CrewAI. You should possess strong programming skills in Python for AI development and be familiar with ReAct prompting, Chain-of-Thought reasoning, and AI fine-tuning. Experience with tools such as Google Cloud, Vertex AI, and CI/CD automation is required to excel in this role. Your responsibilities will include evaluating AI model performance, scalability, and efficiency, as well as having knowledge of multi-agent coordination, reinforcement learning, and distributed AI. You should be able to assess models and write evaluation code, demonstrating your strong problem-solving skills for AI system optimization and troubleshooting. This role as an AI developer requires a BTech degree and proficiency in key skills such as GenAI development and Python. If you are passionate about pushing the boundaries of AI technology and have a keen interest in developing innovative solutions, this full-time, permanent position as an AI developer at our organization could be the perfect fit for you. Recruiter Name: Kathiravan G. Job Code: GO/JC/807/2025

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an AI Senior Developer, you will be responsible for implementing AI solutions using Vertex AI or Google Cloud AI Platform to develop accelerators, IP solutions, or recommendation engines. You will conduct security assessments of AI-powered solutions in alignment with policies and guidelines, and collaborate with stakeholders to seamlessly integrate AI into existing workflows. Your expertise in machine learning frameworks such as TensorFlow and PyTorch, coupled with proficiency in cloud-based AI platforms, will be crucial for the successful execution of projects. Moreover, you should be familiar with building solutions using Large Language Models (LLMs) specifically with Vertex AI, and possess knowledge of AI ethics and responsible AI practices to ensure unbiased model outcomes. Your experience with cloud-native analytics platforms like Google BigQuery will be advantageous, along with strong programming skills in Python for AI, data processing tasks, and UI development. Additionally, your problem-solving capabilities and adeptness in troubleshooting complex technical issues will be essential in this role. Furthermore, effective written and oral communication skills are required for seamless collaboration with team members and stakeholders. You should have an advanced understanding of data engineering principles and experience in creating data pipelines. Proficiency in using version control systems like Git for collaborative code development is crucial, as well as familiarity with Agile methodologies and working in Agile teams. Your ability to perform exploratory data analysis to derive insights and inform model development will be key to the success of AI projects. Experience in integrating and working alongside DevOps teams to ensure smooth workflow transitions is also desirable.,

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You should be an expert in LLMs, multi-agent AI systems, and AI reasoning frameworks, with hands-on experience in LangChain, LangGraph, and CrewAI. Strong programming skills in Python for AI development are essential, along with familiarity with ReAct prompting, Chain-of-Thought reasoning, and AI fine-tuning. Experience with Google Cloud, Vertex AI, and CI/CD automation is required to excel in this role. You must possess the ability to evaluate AI model performance, scalability, and efficiency. Knowledge of multi-agent coordination, reinforcement learning, and distributed AI will be beneficial. Your role will involve evaluating models, writing evaluation code, and applying strong problem-solving skills for AI system optimization and troubleshooting. As a GenAI developer, you will work in the IT/Computers - Software industry. A BTech degree is required for this full-time, permanent position. Key skills include GenAI development and Python programming. If you are passionate about AI development and possess the necessary skills and qualifications, this role offers an exciting opportunity to work with cutting-edge technologies and contribute to innovative projects. Recruiter Name: Kathiravan G

Posted 1 month ago

Apply

7.0 - 23.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Generative AI Lead, you will be responsible for spearheading the design, development, and implementation of cutting-edge GenAI solutions within enterprise-grade applications. Your role will encompass leveraging your expertise in Large Language Models (LLMs), prompt engineering, and scalable AI system architecture, coupled with hands-on experience in MLOps, cloud technologies, and data engineering. Your primary responsibilities will include designing and deploying scalable and secure GenAI solutions utilizing LLMs such as GPT, Claude, LLaMA, or Mistral. You will lead the architecture of Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Weaviate, FAISS, or ElasticSearch. Additionally, you will be involved in prompt engineering, evaluation frameworks, and collaborating with cross-functional teams to integrate GenAI into existing workflows and applications. Moreover, you will develop reusable GenAI modules for various functions like summarization, Q&A bots, and document chat, while leveraging cloud-native platforms such as AWS Bedrock, Azure OpenAI, and Vertex AI for deployment and optimization. You will ensure robust monitoring and observability across GenAI deployments and apply MLOps practices for CI/CD, model versioning, validation, and research into emerging GenAI trends. To be successful in this role, you must possess at least 8 years of overall AI/ML experience, with a focus of at least 3 years on LLMs/GenAI. Strong programming skills in Python and proficiency in cloud platforms like AWS, Azure, and GCP are essential. You should also have experience in designing and deploying RAG pipelines, summarization engines, and chat-based applications, along with familiarity with MLOps tools and evaluation metrics for GenAI systems. Preferred qualifications include experience with fine-tuning open-source LLMs, knowledge of multi-modal AI, familiarity with domain-specific LLMs, and a track record of published work or contributions in the GenAI field. In summary, as a Generative AI Lead, you will play a pivotal role in driving innovation and excellence in the development and deployment of advanced GenAI solutions, making a significant impact on enterprise applications and workflows.,
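
As an illustrative sketch of the retrieval half of a RAG pipeline like those described above, the snippet embeds a few documents with sentence-transformers and searches them with FAISS; the documents, model name, and k value are placeholders, and an orchestration layer such as LangChain or LlamaIndex would assemble the final LLM prompt.

```python
# Illustrative only: dense retrieval for a RAG pipeline using FAISS and sentence-transformers.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Reset a forgotten password from the account settings page.",
    "Invoices are generated on the first business day of each month.",
    "API keys can be rotated from the developer console.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])          # inner product == cosine after normalization
index.add(doc_vecs)

query_vec = encoder.encode(["how do I rotate my key"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query_vec, k=2)
context = "\n".join(docs[i] for i in ids[0])           # retrieved context to prepend to the LLM prompt
print(context)
```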

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

You are an experienced professional with 7+ years of total experience, seeking an opportunity to work with Nagarro, a Digital Product Engineering company. You have hands-on working experience in Python and are familiar with data visualization libraries such as matplotlib, seaborn, and plotly. Your strong grasp of DS stack packages including SciPy, scikit-learn, TensorFlow, PyTorch, NumPy, and Pandas is commendable. Additionally, you possess basic to intermediate SQL skills and have experience working with GCP. In this role, you will be responsible for writing and reviewing high-quality code. You will be required to understand the client's business use cases and technical requirements and translate them into technical designs that elegantly meet the specified requirements. Your ability to identify different solutions and select the best option that aligns with the client's needs will be crucial. Furthermore, you will define guidelines and benchmarks for NFR considerations during project implementation. As part of your responsibilities, you will develop and design overall solutions for defined functional and non-functional requirements. You will be expected to review architecture and design aspects such as extensibility, scalability, security, design patterns, and user experience, ensuring that relevant best practices are followed. Additionally, you will resolve issues raised during code reviews through systematic analysis of the root cause and carry out POCs to validate suggested designs and technologies. To excel in this role, you must have excellent communication and collaboration skills to work effectively across global teams. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is required. If you are passionate about building products, services, and experiences that inspire and excite, then this opportunity at Nagarro is the perfect fit for you. Join us in our dynamic and non-hierarchical work culture, and be part of our global team of experts across 39 countries.

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking a highly skilled and motivated Lead DS/ML engineer to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. We are seeking a highly skilled Data Scientist / ML Engineer with a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning to develop and deploy sophisticated models. The role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production to support a cutting-edge reporting, insights, and recommendations platform for measuring and optimizing online marketing campaigns. The ideal candidate should be comfortable working across data engineering, ML model lifecycle, and cloud-native technologies. Job Description: Key Responsibilities: Data Engineering & Pipeline Development Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data. Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect. Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads. Implement data modeling and feature engineering for ML use cases. Machine Learning Model Development & Validation Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations. Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis. Optimize models for scalability, efficiency, and interpretability. MLOps & Model Deployment Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving. Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining. Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms. Cloud & Infrastructure Optimization Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow. Work on containerized deployment (Docker, Kubernetes) for scalable model inference. Implement cost-efficient, serverless data solutions where applicable. Business Impact & Cross-functional Collaboration Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translate complex model insights into actionable business recommendations. Present findings and performance metrics to both technical and non-technical stakeholders. Qualifications & Skills: Educational Qualifications: Bachelors or Masters degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field. Certifications in Google Cloud (Professional Data Engineer, ML Engineer) is a plus. Must-Have Skills: Experience: 5-10 years with the mentioned skillset & relevant hands-on experience Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer). ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP. 
Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing. Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms. MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools). Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing. Nice-to-Have Skills: Experience with Graph ML, reinforcement learning, or causal inference modeling. Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards. Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies. Experience with distributed computing frameworks (Spark, Dask, Ray). Location: Bengaluru. Brand: Merkle. Time Type: Full time. Contract Type: Permanent.
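
For context on the orchestration work described above, here is a skeleton Airflow DAG for a daily campaign-data ELT load; the task bodies, DAG id, and schedule are placeholders, and the `schedule` parameter name assumes Airflow 2.4 or later.

```python
# Illustrative only: skeleton of a daily ELT DAG; task bodies are intentionally stubbed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_campaign_data(**context):
    """Pull raw campaign metrics from the upstream ad platforms (stub)."""
    ...


def load_to_bigquery(**context):
    """Load transformed records into a BigQuery reporting table (stub)."""
    ...


with DAG(
    dag_id="campaign_elt_daily",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_campaign_data)
    load = PythonOperator(task_id="load", python_callable=load_to_bigquery)
    extract >> load
```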

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Software Engineer specializing in AI/ML development and leading Vertex AI & Gemini projects, you will play a crucial role in developing and deploying solutions using cutting-edge technologies. With 5-8 years of experience in Software Development/Engineering, you will be responsible for integrating GenAI components into enterprise-grade document automation workflows. Your expertise in Google Cloud Platform, Vertex AI, and Gemini models will be essential in contributing to scalable, cloud-native architectures for document ingestion, extraction, summarization, and transformation. Your future duties and responsibilities include developing and deploying solutions using Vertex AI, Gemini models, Document AI, and custom NLP/OCR components. You will also collaborate with architects, MLOps engineers, and business stakeholders to translate requirements into scalable code while ensuring secure and compliant handling of sensitive data in document processing workflows. Staying up to date with the latest Gemini/LLM advancements and integrating relevant innovations into projects will be a key aspect of your role. In order to be successful in this role, you must possess expertise in various skills including Google Cloud Platform (Vertex AI, Cloud Functions, Cloud Run, BigQuery, Document AI, Firestore), GenAI/LLMs (Google Gemini, PaLM, LangChain), OCR & NLP tools (Tesseract, GCP Document AI, spaCy, Hugging Face Transformers), Full Stack technologies (React or Next.js, Node.js or FastAPI, Firebase/Firestore), DevOps/MLOps practices (GitHub Actions, Vertex Pipelines, Docker, Terraform), and Data & Integration tools (REST APIs, GraphQL, Webhooks, Cloud Pub/Sub, JSON/Protobuf). With a solid background in full-stack development, hands-on experience in building products leveraging GenAI, NLP, and OCR, as well as proficiency in Kubernetes concepts, relational and non-relational databases, you will be well-equipped to tackle complex issues and adapt to rapidly evolving AI technologies. Your understanding of privacy regulations, security best practices, and ethical considerations in AI development will be crucial in developing production-ready systems. Additionally, experience working with Google Gemini models, document parsing, NLP, OCR, and GenAI-based transformation will further enhance your capabilities in this role. As an integral part of the team at CGI, you will have the opportunity to turn meaningful insights into action, shaping your career in a company focused on growth and innovation. With a startup mentality and a strong sense of ownership, you will contribute to delivering innovative solutions and building valuable relationships with teammates and clients, ultimately driving success in the world of IT and business consulting services.,

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Test Engineer at Google, you will play a crucial role in ensuring the quality of Google's suite of products and services. Your responsibilities will involve coding, developing test methodologies, writing test plans, creating test cases, and debugging. You will focus on automation testing and work with Android devices or Wear OS. You should hold a Bachelor's degree or equivalent practical experience and have at least 5 years of relevant experience. Preferred qualifications include a Master's degree in Computer Science, Electrical Engineering, or a related field. Experience in building web-based solutions for test processes, fine-tuning AI libraries, and familiarity with Agile methodologies, CI, and release management are desirable. At Google, Test Engineers are not manual testers but instead, they write scripts to automate testing and develop tools for developers to test their code. You will analyze Google's extensive codebase, identify areas for improvement, and devise innovative ways to uncover software vulnerabilities. Your work will significantly impact the quality of Google's products. In the Pixel Wearables team, you will collaborate with international teams to ensure the quality of Pixel Wearable products through integrated software and hardware testing. You will lead wearables testing and performance strategies, leverage AI technologies for testing, build internal tools for testing efforts, and research new testing tools to enhance capabilities. Google's mission is to organize the world's information and make it universally accessible and useful. The Devices & Services team combines AI, software, and hardware to create user-friendly experiences. Your role will involve researching, designing, and developing new technologies to improve user interactions with computing devices. Key Responsibilities: - Own and drive wearables testing and performance strategies - Utilize AI technologies to enhance test coverage, effectiveness, and reporting - Develop internal tools and dashboards for testing efforts - Research and assess new testing tools and technologies - Identify and resolve technical challenges in the testing process,

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! Competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere. ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV), has applied innovation and user-designed, data-driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling, and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands includes ACV Auctions, ACV Transportation, ClearCar, MAX Digital, and ACV Capital within its Marketplace Products, as well as, True360 and Data Services. ACV Auctions in Chennai, India, is looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps, and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive! At ACV, we focus on the Health, Physical, Financial, Social, and Emotional Wellness of our Teammates and to support this we offer industry-leading benefits and wellness programs. ACV's Machine Learning (ML) team is looking to grow its MLOps team. Multiple ACV operations and product teams rely on the ML team's solutions. Current deployments drive opportunities in the marketplace, in operations, and sales, to name a few. As ACV has experienced hyper growth over the past few years, the volume, variety, and velocity of these deployments have grown considerably. Thus, the training, deployment, and monitoring needs of the ML team have grown as we've gained traction. MLOps is a critical function to help ourselves continue to deliver value to our partners and our customers. Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and have the ability to support the development and implementation of end-to-end ML-enabled software solutions to meet the needs of their stakeholders. Those who will excel in this role will be those who listen with an ear to the overarching goal, not just the immediate concern that started the query. They will be able to show their recommendations are contextually grounded in an understanding of the practical problem, the data, and theory as well as what product and software solutions are feasible and desirable. The core responsibilities of this role are: - Working with fellow machine learning engineers to build, automate, deploy, and monitor ML applications. 
- Developing data pipelines that feed ML models. - Deploying new ML models into production. - Building REST APIs to serve ML model predictions. - Monitoring the performance of models in production. Required Qualifications: - Graduate education (MS or PhD) in a computationally intensive domain or equivalent work experience. - 3+ years of prior relevant work or lab experience in ML projects/research. - Advanced proficiency with Python, SQL, etc. - Experience with building and deploying REST APIs (Flask, FastAPI). - Experience with cloud services (AWS / GCP) and Kubernetes, Docker, CI/CD. Preferred Qualifications: - Experience with MLOps-specific tooling like Vertex AI, Ray, Feast, Kubeflow, or ClearML is a plus. - Experience with distributed caching technologies (Redis). - Experience with real-time data streaming and processing (Kafka). - Experience with building data pipelines. - Experience with training ML models. Our Values: Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling
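
As a minimal sketch of the REST-serving responsibility listed above, here is a FastAPI endpoint wrapping a pickled model; the artifact path, request schema, and scikit-learn-style model interface are assumptions for illustration.

```python
# Illustrative only: a minimal FastAPI service exposing an ML model for online prediction.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:            # assumed artifact path; model assumed sklearn-like
    model = pickle.load(f)


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")
def predict(req: PredictRequest):
    score = float(model.predict([req.features])[0])
    return {"prediction": score}

# Run locally with: uvicorn main:app --reload
```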

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As Ford Motor Company embarks on a significant multi-year Platform Lifecycle Management (PLM) program to modernize critical IT applications across the enterprise, you have a unique opportunity to join the team as a GenAI Technical Manager (LL6). In this role, you will play a pivotal part in driving the practical adoption of Generative AI (GenAI) at Ford, with a focus on creating accelerators for the PLM modernization effort and enhancing the Ford Developer Experience (DX). Your expertise will be crucial in leading the technical development and implementation of GenAI solutions within this strategic program. You will collaborate closely with various teams including PLM program leaders, GDIA (Global Data Insight & Analytics), architecture teams, PDOs (Product Driven Organizations), and engineering teams to design, build, and deploy cutting-edge GenAI tools and platforms. Your responsibilities will include leading the technical design, development, testing, and deployment of GenAI solutions, translating GenAI strategy into actionable projects, managing the technical lifecycle of GenAI tools, and overseeing the integration of GenAI capabilities into existing workflows and processes. As a technical expert in GenAI models and frameworks, you will provide guidance to development teams, architects, and stakeholders on best practices, architecture patterns, security considerations, and ethical AI principles. You will stay updated on the evolving GenAI landscape, evaluate new tools and models, and lead the development of GenAI-powered accelerators and tools to automate and streamline processes within the PLM program. Collaboration and stakeholder management will be key aspects of your role, requiring effective communication of complex technical concepts to diverse audiences. You will also lead proof-of-concept projects with emerging GenAI technologies, champion experimentation and adoption of successful tools and practices, and mentor junior team members in GenAI development tasks. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Software Engineering, Artificial Intelligence, or a related field, along with 8-10+ years of experience in software development/engineering with a focus on AI/ML and Generative AI solutions. Deep practical expertise in GenAI, strong software development foundation, and familiarity with enterprise application context are essential qualifications. Preferred qualifications include GCP certifications, experience with Agile methodologies, and familiarity with PLM concepts or the automotive industry. If you are passionate about innovation in the AI space, possess strong analytical and strategic thinking skills, and excel in a fast-paced, global environment, we invite you to join us in shaping the future of AI at Ford Motor Company.,

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a motivated and technically skilled Data Scientist with 4+ years of experience in applying AI/ML techniques to solve real-world problems. You possess a solid foundation in machine learning, statistical modeling, and data analysis, with hands-on experience working with large datasets and deploying models in production environments. Your key responsibilities include designing, developing, and deploying machine learning models and AI solutions for business problems across multiple domains. You will collaborate with cross-functional teams including engineering, product, and analytics to identify opportunities for data-driven solutions. Conducting exploratory data analysis (EDA), feature engineering, and model selection using industry best practices are essential. You will also perform experiments, model validation, and performance tuning, as well as build reusable code and libraries for future use while contributing to data science platform improvements. Presenting findings and insights clearly to both technical and non-technical stakeholders is a crucial part of your role. You should have 4+ years of relevant experience in data science, with hands-on experience in machine learning, deep learning, or AI systems. Proficiency in Python and relevant libraries such as Pandas, Scikit-learn, TensorFlow, PyTorch, NumPy, etc., is required. A strong understanding of supervised, unsupervised, and reinforcement learning techniques is necessary. Experience with SQL, data wrangling, and working with large datasets is expected, and exposure to cloud platforms like AWS, Azure, or GCP is a plus. Familiarity with MLOps, model deployment, and version control (e.g., Git) is beneficial. Preferred qualifications include experience working with NLP, computer vision, or time-series forecasting, as well as familiarity with data visualization tools such as Matplotlib, Seaborn, Power BI, Tableau. An understanding of A/B testing, causal inference, and statistical hypothesis testing is advantageous. You should be able to work in a fast-paced, agile environment with minimal supervision and have experience in GenAI frameworks (OpenAI, Vertex AI), multi-agent architectures, and MLOps tooling. In terms of the tech stack, you should be familiar with languages and frameworks like Python, Java, SQL, JavaScript/TypeScript, and tools such as TensorFlow, PyTorch, and Scikit-learn. Cloud platforms like GCP Vertex AI, BigQuery, Cloud Functions, Cloud Run, Dataflow, and Pub/Sub are essential. Knowledge of data and MLOps tools like BigQuery, AlloyDB, MongoDB, Redis, relation databases (e.g., Oracle, MySQL, MSSQL), and MLOps tools (e.g., MLflow, Kubeflow, Airflow, Docker, Kubernetes) is required. Understanding of GenAI and agents such as Vertex AI Gemini, OpenAI, Claude, LangChain, LlamaIndex, FAISS, Pinecone, RAG pipelines, autonomous agents, multi-agent orchestration, memory modules, tool use, and prompt chaining is beneficial. Dev tools like GitHub Copilot, ChatGPT, Cursor AI, or any other Gen AI tools are also part of the tech stack you will work with.,

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a skilled and passionate Platform Architect specializing in GenAI/LLM Systems, you will be responsible for architecting scalable, cloud-native infrastructure to support enterprise-grade GenAI and LLM-powered applications. Your role will involve designing and deploying secure, reliable API gateways, orchestration layers (such as Airflow, Kubeflow), and CI/CD workflows for ML and LLM pipelines. Collaboration with data and ML engineering teams will be essential to enable low-latency LLM inference and vector-based search platforms across GCP (or multi-cloud). You will define and implement a semantic layer and data abstraction strategy to facilitate consistent and governed consumption of data across LLM and analytics use cases. Additionally, implementing robust data governance frameworks including role-based access control (RBAC), data lineage, cataloging, observability, and metadata management will be part of your responsibilities. Your role will involve guiding architectural decisions around embedding stores, vector databases, LLM tooling, and prompt orchestration (e.g., LangChain, LlamaIndex), as well as establishing compliance and security standards to meet enterprise SLA, privacy, and auditability requirements. To excel in this role, you should have at least 7 years of experience as a Platform/Cloud/Data Architect, ideally within GenAI, Data Platforms, or LLM systems. Strong cloud infrastructure experience on GCP (preferred), AWS, or Azure, including Kubernetes, Docker, Terraform/IaC is required. Demonstrated experience in building and scaling LLM-powered architectures using OpenAI, Vertex AI, LangChain, LlamaIndex, etc. will be a significant advantage. You should also have familiarity with semantic layers, data catalogs, lineage tracking, and governed data delivery across APIs and ML pipelines. A track record of deploying production-grade GenAI/LLM services that meet performance, compliance, and enterprise integration requirements is essential. Strong communication and cross-functional leadership skills are also desired, as you will be required to translate business needs into scalable architecture.,

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for the role of a GCP Vertex AI Engineer with 5 to 12 years of hands-on experience in Generative AI, AI/ML, and Data Science technologies. Your main tasks will involve implementing end-to-end projects using Generative AI, building production-grade solutions, and developing Generative AI applications using methodologies such as RAG, prompt engineering, and fine-tuning. You will also work with structured and unstructured data to build RAG-based solutions. In this role, you will need extensive experience working with cloud AI services such as GCP Vertex AI, Azure AI, and AWS Bedrock, and managing large language models. While experience with GCP is preferred, experience with other cloud platforms will also be considered. Additionally, you will work with AI/ML search and data services within the GCP ecosystem, including GCP Vertex AI, Vertex AI Vector Search, Gemini, Cloud Run, and Cloud SQL. Familiarity with agent frameworks like Google ADK, LangGraph, and CrewAI is also required. Proficiency in Python and frameworks like Flask and FastAPI is essential for this role. Experience with AI/ML, deep learning, TensorFlow, Python, and NLP will be beneficial. A solid understanding of Generative AI concepts, methodologies, and techniques is highly desirable. Mandatory skills for this position include GCP, Vertex AI, Data Science, Python, and Cloud technologies.
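
To illustrate the Vertex AI / Gemini stack this posting centers on, here is a short sketch that calls a Gemini model through the Vertex AI SDK; the project, region, model name, and prompt are assumptions and will differ per environment.

```python
# Illustrative only: invoking a Gemini model via the Vertex AI SDK (google-cloud-aiplatform).
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")   # assumed project/region

model = GenerativeModel("gemini-1.5-flash")                        # assumed model name
response = model.generate_content(
    "Summarize the key maintenance risks in the following work-order notes: ..."
)
print(response.text)
```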

Posted 1 month ago

Apply

12.0 - 18.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Freshworks
Organizations everywhere struggle under the crushing costs and complexities of solutions that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There's another option. Freshworks. With a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description
Function: Engineering AI
Reports To: VP - Engineering AI
Team Size: 20 to 40 (Data Scientists, ML Engineers, Software Engineers)

We are looking for a Senior Director - Engineering AI to lead the charter for machine learning and GenAI initiatives across the Freshworks platform. This leader will be responsible for defining the AI vision, leading high-impact cross-functional programs, embedding intelligence into our product suite, and scaling a world-class data science function globally. You'll operate at the intersection of business, product, and technology, steering the strategic use of AI to solve customer problems, improve operational efficiency, and drive revenue growth.

Key Responsibilities
Strategy & Vision: Define and own the AI roadmap aligned with company objectives. Drive the long-term strategy for AI/ML initiatives and data monetization opportunities. Evangelize a culture of experimentation, evidence-based decision-making, and responsible AI.
Team Leadership: Hire, mentor, and develop a world-class team of data scientists, machine learning engineers, and software engineers. Foster a collaborative, inclusive, and high-impact environment with a strong learning and delivery mindset.
Cross-functional Leadership: Partner closely with Engineering and Product teams to embed ML models into products and services. Collaborate with stakeholders across business functions to identify and prioritize use cases for AI.
Technical Execution: Oversee development and deployment of scalable ML models, statistical models, NLP solutions, and recommendation engines. Ensure rigorous experimentation and model validation using state-of-the-art techniques. Champion data governance, quality, and security practices.
Metrics & Impact: Define KPIs and success metrics for data science initiatives. Deliver measurable impact on revenue growth, operational efficiency, and customer experience.

Qualifications
Professional Experience: 12-18 years of experience in AI, with at least 5 years in senior leadership roles managing large data science teams. Proven experience delivering ML-based products and solutions in a SaaS or digital platform environment. Demonstrated ability to influence product roadmaps and drive AI strategy in large-scale environments.
Technical Expertise: Deep understanding of applied machine learning, NLP, deep learning, causal inference, optimization, and GenAI (LLMs, embeddings, retrieval pipelines). Strong hands-on foundation in Python, SQL, Spark, and ML frameworks such as PyTorch, TensorFlow, and scikit-learn. Familiarity with modern cloud data stacks (e.g., Snowflake, Databricks, AWS SageMaker, Vertex AI, LangChain/RAG pipelines). Drive operational efficiency and engineering productivity across AI and data platform teams through streamlined processes, tooling, and automation. Establish and enforce standardized practices for data engineering, model development, and deployment across teams to ensure consistency, quality, and reuse. Champion platform-first thinking: building reusable components, shared services, and self-service capabilities to accelerate experimentation and delivery. Experience with production-grade ML deployment, experimentation, and performance tracking.
Leadership & Influence: Strong executive presence with the ability to influence senior stakeholders across Product, Engineering, Sales, and Marketing. Effective communication of complex technical concepts to diverse audiences, including the C-suite, product managers, and non-technical partners. Lead cross-functional initiatives to optimize end-to-end ML workflows, from data ingestion to model monitoring, reducing cycle times and increasing model velocity. Partner with engineering, product, and infrastructure teams to align roadmaps and eliminate friction in building, testing, and deploying AI solutions at scale. Passionate about building a data-driven culture and driving talent development across the organization.

Nice to Have
Experience in customer experience, CRM, service management, or sales tech domains.
Hands-on exposure to LLM fine-tuning, prompt engineering, and GenAI application development.
Hands-on experience developing and scaling ML pipelines and models using Databricks and related tools (e.g., MLflow).
Participation in AI ethics or responsible AI governance efforts.
Open-source contributions or published research in relevant domains.

Why Join Freshworks
Shape the AI-first future of one of the fastest-growing SaaS companies globally.
Build and ship data-driven solutions at scale, impacting 60,000+ businesses.
Work with a global leadership team that values innovation, autonomy, and customer-centricity.

Additional Information
At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.
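
As a small, hedged example of the experiment tracking and MLflow usage mentioned above, the sketch below logs a scikit-learn baseline to a local MLflow tracking store; the experiment name, synthetic dataset, and hyperparameter are illustrative assumptions, and on Databricks the tracking URI and experiment path would instead come from the workspace:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name used only for this sketch.
mlflow.set_experiment("churn-baseline")

# Synthetic data standing in for a real feature table.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000, C=0.5)
    model.fit(X_train, y_train)

    # Log the hyperparameter and a headline metric so runs are comparable.
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test)))

    # Persist the fitted model as a run artifact for later registration/serving.
    mlflow.sklearn.log_model(model, "model")
```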

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

We are looking for a highly skilled MLOps Engineer to design, deploy, and manage machine learning pipelines on Google Cloud Platform (GCP). Your responsibilities will include automating ML workflows, optimizing model deployment, ensuring model reliability, and implementing CI/CD pipelines for ML systems. You will collaborate with technical teams to develop cutting-edge machine learning systems that drive business value.

In this role, you will manage the deployment and maintenance of machine learning models in production environments, ensuring seamless integration with existing systems. You will monitor model performance using metrics such as accuracy, precision, recall, and F1 score, addressing issues like performance degradation, drift, or bias. Troubleshooting problems, maintaining documentation, and managing model versions for audit and rollback will be part of your routine tasks. Your duties will also involve proactively analyzing monitoring data to identify potential issues and providing regular performance reports to stakeholders. Additionally, you will focus on optimizing queries and pipelines, as well as modernizing applications when necessary.

To qualify for this role, you should have expertise in programming languages such as Python and SQL, along with a solid understanding of MLOps best practices for deploying enterprise-level ML systems. Familiarity with machine learning concepts, models, and algorithms, such as regression, clustering, and neural networks, including deep learning and transformers, is essential. Experience with GCP tools like BigQuery ML, Vertex AI Pipelines, Model Versioning & Registry, Cloud Monitoring, and Kubernetes is preferred. Strong communication skills, both written and oral, are crucial, as you will be required to prepare detailed technical documentation for new and existing applications. You should demonstrate strong ownership and collaborative qualities in your domain, taking the initiative to identify and drive opportunities for improvement and process streamlining. A Bachelor's Degree in a quantitative field or equivalent job experience is required for this position.

Experience with Azure MLOps, familiarity with Cloud Billing, setting up or supporting NLP, GenAI, and LLM applications with MLOps features, and working in an Agile environment are considered bonus qualifications. If you are passionate about MLOps, have a knack for problem-solving, and enjoy working in a collaborative environment to deliver innovative machine learning solutions, we would like to hear from you.
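
As a minimal sketch of the performance-monitoring step described above, assuming scikit-learn for the metric computations and hard-coded labels in place of real serving logs (the threshold value is an illustrative assumption):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical daily batch of ground-truth labels and model predictions; in
# production these would be pulled from BigQuery or the serving logs rather
# than hard-coded.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}

# Simple alerting rule: flag the run if any metric falls below an agreed
# threshold, one way a degradation/drift check might be wired into reporting.
THRESHOLD = 0.8
degraded = {name: value for name, value in metrics.items() if value < THRESHOLD}

print(metrics)
print("degraded metrics:", degraded or "none")
```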

Posted 1 month ago

Apply