Jobs
Interviews

93 Vertex AI Jobs - Page 3

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

7 - 13 Lacs

Hyderabad

Work from Office

Role: Machine Learning Engineer

Required Skills & Experience:
- 3+ years of hands-on experience building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering; adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred; R also considered).
- Solid understanding of machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques such as deep learning, natural language processing (NLP), or computer vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and CI/CD pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Interested candidates, share your CV at dikshith.nalapatla@motivitylabs.com
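
The data-blending and feature-engineering skills this posting asks for can be sketched in a few lines of scikit-learn. This is a minimal illustration only, with made-up column names and toy data (none of it comes from the posting): a single pipeline that combines a structured numeric feature with an unstructured text feature, then serializes the fitted model for deployment.

```python
# Sketch only: blending structured (numeric) and unstructured (text)
# features in one scikit-learn pipeline, then serializing it for
# deployment (e.g., behind a REST API in a Docker container).
# Columns and data are illustrative assumptions.
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [25, 47, 31, 52, 38, 29],
    "log_text": ["login ok", "error timeout", "login ok", "disk full",
                 "error timeout", "login ok"],
    "label": [0, 1, 0, 1, 1, 0],
})

pre = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),      # structured feature
    ("txt", TfidfVectorizer(), "log_text"),  # unstructured feature
])
model = Pipeline([("pre", pre), ("clf", LogisticRegression())])
model.fit(df[["age", "log_text"]], df["label"])

# Serialize and restore, as a deployment step would.
restored = pickle.loads(pickle.dumps(model))
print(restored.predict(df[["age", "log_text"]]))
```

The key design point is that preprocessing lives inside the pipeline, so the serialized artifact carries its own feature engineering into production.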

Posted 3 weeks ago

Apply

1.0 - 5.0 years

4 - 9 Lacs

Chennai

Work from Office

AI in Cloud Deployment: combining AI/ML expertise with cloud infrastructure and DevOps skills.

Job Title: DevOps Engineer
Location: Chennai
Job Type: Full-Time

Job Summary: We are looking for a highly skilled AI Cloud Deployment Engineer to build, deploy, and scale AI/ML models and pipelines in cloud environments. The ideal candidate has strong experience with cloud platforms (Azure, AWS, GCP), containerization, CI/CD, and MLOps best practices. You will work closely with data scientists, ML engineers, and DevOps teams to operationalize AI solutions.

Key Responsibilities:
- Design and implement cloud-based deployment pipelines for AI/ML models using Azure and AWS.
- Collaborate with data scientists to containerize and deploy ML models using Docker and Kubernetes.
- Set up CI/CD pipelines for continuous model integration and deployment (MLOps).
- Monitor, troubleshoot, and optimize deployed AI services for performance and reliability.
- Manage scalable data pipelines and APIs using cloud-native technologies.
- Apply security best practices to model deployment and API access control.
- Automate infrastructure provisioning with tools like Terraform or ARM templates.
- Ensure reproducibility and versioning of ML experiments using tools like MLflow, DVC, or SageMaker.
- Build dashboards and logging/monitoring systems for AI applications.

Required Skills:
- Hands-on experience in AI/ML model deployment in cloud environments (Azure ML, SageMaker, Vertex AI, etc.).
- Proficiency in cloud computing: AWS, Azure, or GCP.
- Experience with Docker, Kubernetes, and container orchestration.
- Strong understanding of MLOps, including model versioning, drift detection, and pipeline automation.
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or Azure DevOps.
- Good programming skills in Python (FastAPI for serving models) and Java.
- Experience with infrastructure-as-code (Terraform, CloudFormation, Azure Bicep).
- Knowledge of cloud security, networking, and identity management.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience with ML tools like MLflow, Kubeflow, SageMaker Pipelines.
- Knowledge of data engineering frameworks (Airflow, Kafka, Spark).
- Prior experience working in Agile/Scrum environments.

Nice to Have:
- Exposure to edge deployment for AI models (e.g., on IoT devices).
- Knowledge of LLMOps and deploying generative AI models in production.
- Familiarity with serverless compute services (AWS Lambda, Azure Functions).
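
The drift detection mentioned under MLOps can be illustrated with a few lines of pure Python. This is a toy sketch under stated assumptions: a simple mean-shift z-score against a training baseline, with an invented threshold and made-up numbers (real pipelines typically use richer tests such as PSI or KS statistics).

```python
# Sketch only: a toy data-drift check of the kind MLOps pipelines automate.
# Compares a live feature sample against the training baseline using a
# mean-shift z-score; threshold and data are illustrative assumptions.
import statistics

def drift_score(baseline, live):
    """Return |mean shift| in units of baseline standard deviation."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else 0.0

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # training-time feature values
stable   = [10.1, 9.9, 10.4]                     # live sample, no drift
shifted  = [15.0, 16.2, 15.5]                    # live sample, drifted

THRESHOLD = 3.0  # alert when the live mean drifts > 3 baseline std-devs
print("stable drift:", round(drift_score(baseline, stable), 2))
print("shifted drift:", round(drift_score(baseline, shifted), 2))
```

A monitoring job would run a check like this on a schedule and page or trigger retraining when the score crosses the threshold.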

Posted 4 weeks ago

Apply

12.0 - 16.0 years

25 - 40 Lacs

Pune

Work from Office

We are seeking a highly skilled and experienced GCP Cloud Architect to lead the design, development, and optimization of cloud-based solutions on the Google Cloud Platform. This role requires a deep understanding of cloud architecture, GCP services, and best practices for building scalable, secure, and efficient cloud environments. The ideal candidate will have hands-on experience with GCP tools, infrastructure automation, and cloud security.

Must-Have Skills:
- GCP Expertise: in-depth knowledge of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub.
- Infrastructure Automation: proficiency with IaC tools like Terraform or GCP Deployment Manager.
- Cloud Security: strong understanding of IAM, VPC, firewall configurations, and encryption in GCP.
- Programming Skills: proficiency in Python, Go, or other languages for automation and scripting.
- Data Engineering Knowledge: familiarity with data pipelines and services like Dataflow, Dataproc, or BigQuery.
- Monitoring and Observability: hands-on experience with GCP monitoring tools and best practices.
- Problem-Solving: strong analytical and troubleshooting skills.
- Collaboration: excellent communication and teamwork abilities to work effectively across teams.

Roles and Responsibilities:
- Architecture Design: design and develop robust, scalable cloud architectures leveraging GCP services to meet business requirements.
- Cloud Migration and Optimization: lead cloud migration initiatives and optimize existing GCP environments for cost, performance, and reliability.
- Infrastructure as Code (IaC): implement and maintain infrastructure automation using tools like Terraform or Deployment Manager.
- Integration with Cloud Services: integrate various GCP services (e.g., BigQuery, Pub/Sub, Cloud Functions) into cohesive, efficient solutions.
- Observability and Monitoring: set up monitoring, logging, and alerting using GCP-native tools like Cloud Monitoring and Cloud Logging.
- Security and Compliance: ensure cloud environments adhere to industry security standards and compliance requirements, implementing IAM policies, encryption, and network security measures.
- Collaboration: work with cross-functional teams, including DevOps, data engineers, and application developers, to deliver cloud solutions.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud Architect, Professional Data Engineer).
- Experience with container orchestration using Kubernetes (GKE).
- Knowledge of CI/CD pipelines and DevOps practices.
- Exposure to AI/ML services like Vertex AI.
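
The IaC tools named above (Terraform, Deployment Manager) share one core idea: resources are declared with their dependencies, and the tool derives a safe creation order from the resulting graph. A toy pure-Python sketch of that ordering step, with invented resource names standing in for real GCP configuration:

```python
# Sketch only: how an IaC tool derives resource creation order from
# declared dependencies. Resource names are illustrative, not real
# Terraform/GCP configuration.
from graphlib import TopologicalSorter

# Each resource lists the resources it depends on.
resources = {
    "vpc_network": [],
    "subnet": ["vpc_network"],
    "firewall_rule": ["vpc_network"],
    "gke_cluster": ["subnet"],
    "bigquery_dataset": [],
}

order = list(TopologicalSorter(resources).static_order())
print("apply order:", order)
```

Terraform does the same in reverse on destroy, which is why dependency declarations matter as much for teardown as for provisioning.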

Posted 1 month ago

Apply

4.0 - 6.0 years

20 - 22 Lacs

Bengaluru

Work from Office

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Immediate joiners preferred.
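
The prompt engineering and evaluation workflow this role describes is model-agnostic at its core: a template, a model call, and a scored test set. Since the Vertex AI GenAI SDK evolves quickly, this sketch substitutes a fake keyword-based stand-in for the real model call; the prompt, intents, and test cases are all invented for illustration.

```python
# Sketch only: a provider-agnostic prompt template plus a tiny evaluation
# harness, as used when tuning prompts for an LLM (e.g., Gemini on
# Vertex AI). fake_llm and the cases below are illustrative stand-ins.
PROMPT = (
    "You are a media-domain assistant.\n"
    "Reply with exactly one tag: BILLING, PLAYBACK, or CONTENT.\n"
    "Request: {request}\nTag:"
)

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; crude keyword rules only."""
    text = prompt.lower()
    if "charge" in text or "invoice" in text:
        return "BILLING"
    if "buffer" in text or "lag" in text:
        return "PLAYBACK"
    return "CONTENT"

cases = [
    ("Why was I charged twice?", "BILLING"),
    ("The video keeps buffering", "PLAYBACK"),
    ("Recommend a new series", "CONTENT"),
]
hits = sum(fake_llm(PROMPT.format(request=q)) == gold for q, gold in cases)
print(f"prompt accuracy: {hits}/{len(cases)}")
```

Swapping `fake_llm` for a real endpoint call turns this harness into the offline evaluation loop used when comparing prompt variants.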

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Bengaluru

Work from Office

(Immediate Joiners)

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Posted 1 month ago

Apply

4.0 - 8.0 years

1 - 1 Lacs

Ahmedabad

Work from Office

We're seeking a seasoned AI/ML Developer to join our tech team and help build next-generation GenAI products and scalable AI-powered APIs. You'll architect and deploy intelligent workflows on Google Cloud Platform, integrate cutting-edge LLMs, and design real-time intent-detection systems across multiple user journeys.

Key Responsibilities:
- Model Development & Deployment: build, train, and deploy AI/ML models using GCP's ADK and Vertex AI; implement prompt-engineering strategies for large language models (e.g., Gemini).
- Vector Search & Retrieval: design and maintain vector search pipelines on GCP; integrate with LangChain and other tools for context-aware retrieval.
- API & Backend Engineering: develop secure, scalable RESTful APIs using FastAPI and Python; automate deployments via GitHub CI/CD workflows.
- Intent-Detection Systems: architect multi-flow intent-detection pipelines with high accuracy; continuously monitor and optimize model performance.
- Cloud Infrastructure & Security: enforce best practices for secure, scalable GCP services; collaborate with DevOps to ensure reliable, automated infrastructure.
- Collaboration & Documentation: work closely with cross-functional teams (DevOps, Product, QA); maintain clear technical documentation and share knowledge.

Required Skills & Qualifications:
- AI/ML Expertise: 4+ years hands-on with AI/ML solutions in production; proficiency with GCP services: Vertex AI, ADK, Firestore.
- Vector Search & LLM Integration: experience building vector search pipelines on GCP; strong familiarity with Gemini, LangChain, or similar LLM frameworks.
- Backend Development: advanced Python skills; production experience with FastAPI; solid understanding of REST principles, authentication, and security.
- DevOps & Automation: GitHub Actions (or equivalent) for CI/CD; Infrastructure as Code (Terraform, Deployment Manager, etc.).
- Intent Detection & NLP: track record of building intent-classification or NLU systems; familiarity with metrics and A/B testing for conversational AI.
- Soft Skills: excellent problem-solving and analytical abilities; strong communication skills and a collaborative mindset.
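
An intent-classification system of the kind this posting describes usually starts from a simple supervised baseline before any LLM is involved. A minimal sketch with scikit-learn, using invented intents and training utterances (real systems would train on far more data and evaluate on a held-out set):

```python
# Sketch only: a TF-IDF + logistic regression intent-classification
# baseline. Intents and utterances are made-up illustration data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "cancel my subscription", "stop my plan please",
    "reset my password", "I forgot my login password",
    "talk to a human agent", "connect me to support",
]
intents = ["cancel", "cancel", "password", "password", "agent", "agent"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

print(clf.predict(["please cancel my plan"])[0])
```

A baseline like this also provides the yardstick for the A/B tests mentioned above: an LLM-based detector has to beat it on the same labeled evaluation set.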

Posted 1 month ago

Apply

4.0 - 7.0 years

20 - 22 Lacs

Bengaluru

Work from Office

Job Description: AI/ML GenAI Expert (4-7 Years Experience)

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Posted 1 month ago

Apply

3.0 - 8.0 years

12 - 15 Lacs

Mumbai

Work from Office

Responsibilities:
- Develop and maintain data pipelines using GCP.
- Write and optimize queries in BigQuery.
- Utilize Python for data processing tasks.
- Manage and maintain SQL Server databases.

Must-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Proficiency in BigQuery query writing.
- Strong Python programming skills.
- Expertise in SQL Server.

Good to Have:
- Knowledge of MLOps practices.
- Experience with Vertex AI.
- Background in data science.
- Familiarity with any data visualization tool.
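
The "write and optimize queries, process results in Python" loop at the heart of this role can be sketched without GCP credentials by running the same kind of aggregate SQL against in-memory SQLite. The table and columns are invented for illustration, and BigQuery's dialect differs in details (e.g., backtick-quoted table names, `QUALIFY`), but the shape of the workflow is the same.

```python
# Sketch only: an aggregate query of the kind this role writes in
# BigQuery, run against in-memory SQLite as a stand-in. Table and
# columns are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, event TEXT, amount REAL);
    INSERT INTO events VALUES
        ('u1', 'purchase', 10.0),
        ('u1', 'purchase', 5.0),
        ('u2', 'purchase', 7.5),
        ('u2', 'refund',  -7.5);
""")

rows = conn.execute("""
    SELECT user_id, SUM(amount) AS net
    FROM events
    GROUP BY user_id
    ORDER BY net DESC
""").fetchall()
print(rows)
```

In production the same query string would go to the BigQuery client instead of `sqlite3`, with results landing in a DataFrame for downstream Python processing.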

Posted 1 month ago

Apply

10.0 - 15.0 years

8 - 10 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office

Job Title: Data Scientist and Analytics, Level 7: Manager (Ind & Func AI Decision Science Manager, S&C)
Management Level: 07 - Manager
Location: Bangalore/Gurgaon/Hyderabad/Mumbai
Must-have skills: technical (Python, SQL, ML and AI); functional (data science and B2B analytics, preferably in the Telco and S&P industries)
Good-to-have skills: GenAI, Agentic AI, cloud (AWS/Azure/GCP)

About Global Network Data & AI: Accenture Strategy & Consulting's Global Network Data & AI practice helps clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data: insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities, from accessing and reporting on data to predictive modelling, to outperform the competition.

About the Comms & Media practice: The Accenture Center for Data and Insights (CDI) team helps businesses integrate data and AI into their operations to drive innovation and business growth by designing and implementing data strategies, generating actionable insights from data, and enabling clients to make informed decisions. In CDI, we leverage AI (predictive + generative), analytics, and automation to build innovative and practical solutions, tools, and capabilities. The team is also building and socializing a Marketplace to democratize data and AI solutions within Accenture and for clients. Globally, the CDI practice works across industries to develop value-growth strategies for its clients and infuse AI and GenAI to help deliver on their top business imperatives, i.e., revenue growth and cost reduction. From multi-year Data & AI transformation projects to shorter, more agile engagements, we have a rapidly expanding portfolio of hyper-growth clients and an increasing footprint with next-gen solutions and industry practices.

Roles & Responsibilities:
- Help clients design and deliver AI/ML solutions; candidates should be experienced in B2B analytics, strong in the Telco and S&P domains and AI fundamentals, with good hands-on experience in:
  - Working with large data sets and presenting conclusions to key stakeholders.
  - Data management using SQL.
  - Data manipulation and aggregation using Python.
  - Propensity modeling using various ML algorithms.
  - Text mining using NLP/AI techniques.
- Propose solutions to the client, based on gap analysis of existing Telco platforms, that can generate long-term, sustainable value.
- Gather business requirements from client stakeholders via interviews and workshops.
- Track down and read all previous information on the problem in question; explore obvious and known avenues thoroughly; ask a series of probing questions to get to the root of a problem.
- Understand the as-is process, identify issues that can be resolved through Data & AI or process solutions, and design the to-be state at a detailed level.
- Understand customer needs and translate them into business requirements, business process flows, and functional requirements; inform the best approach to the problem.
- Adopt a clear and systematic approach to complex issues (i.e., A leads to B leads to C); analyze relationships between parts of a problem; anticipate obstacles and identify a critical path for a project.
- Independently deliver products and services that empower clients to implement effective solutions; make specific changes and improvements to processes or own work to achieve more.
- Work with other team members and make deliberate efforts to keep others up to date.
- Establish a consistent, collaborative presence with clients and act as the primary point of contact for assigned clients; escalate, track, and solve client issues.
- Partner with clients to understand end clients' business goals, marketing objectives, and competitive constraints.
- Storytelling: crunch the data and numbers to craft a story for senior client stakeholders.

Professional & Technical Skills:
- Overall 10+ years of experience in data science.
- B.Tech in Engineering from a Tier 1 school, or MSc in Statistics/Data Science from a Tier 1/Tier 2 school.
- Demonstrated experience solving real-world data problems through Data & AI.
- Direct onsite, client-facing experience (inside client offices in India or abroad) is mandatory.
- Proficiency with data mining, mathematics, and statistical analysis.
- Advanced pattern recognition and predictive modeling experience; knowledge of advanced analytical fields such as text mining, image recognition, video analytics, and IoT.
- Execution-level understanding of econometric/statistical modeling: traditional techniques (linear/logistic regression, multivariate statistical analysis, time series, fixed/random effect modelling); machine learning techniques (random forest, gradient boosting, XGBoost, decision trees, clustering); deep learning techniques (RNNs, CNNs).
- Experience with statistical modeling software: Python (must), R, PySpark, SQL (must), BigQuery, Vertex AI.
- Proficient in Excel, Word, PowerPoint, and corporate soft skills.
- Knowledge of dashboarding platforms: Excel, Tableau, Power BI, etc.
- Excellent written and oral communication skills, with the ability to clearly communicate ideas and results to non-technical stakeholders.
- Strong analytical and problem-solving skills; a self-starter able to work independently across multiple projects and set priorities; a strong team player; proactive and solution-oriented, able to guide junior team members.
- Good-to-haves: execution knowledge of optimization techniques (exact linear and non-linear optimization; evolutionary optimization, both population- and search-based algorithms); cloud platform certification; experience in computer vision.

Educational Qualification: B.Tech in Engineering from a Tier 1 school, or MSc in Statistics/Data Science from a Tier 1/Tier 2 school.
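
Propensity modeling, listed among the hands-on responsibilities, means estimating the probability that a customer takes an action (churn, purchase, upgrade). A toy sketch with gradient boosting; the features, the synthetic conversion rule, and the scored customer are all invented for illustration, not client data.

```python
# Sketch only: a toy propensity model (probability a customer converts)
# using gradient boosting. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.integers(1, 60, n),    # tenure in months
    rng.uniform(0, 500, n),    # monthly spend
])
# Synthetic rule: long tenure AND high spend -> converts.
y = ((X[:, 0] > 24) & (X[:, 1] > 250)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
propensity = model.predict_proba([[48, 400.0]])[0, 1]  # a likely converter
print("propensity:", round(propensity, 3))
```

In practice these scores feed campaign targeting: customers are ranked by propensity and the top deciles are actioned, with lift measured against a holdout.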

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Consultant - GCP Sr. Data Engineer.

We are seeking a highly accomplished and strategic Google Cloud Data Engineer with deep data engineering experience and a significant, demonstrable focus on the Google Cloud Platform (GCP). In this leadership role, you will be instrumental in defining and driving our overall data strategy on GCP, architecting transformative data solutions, and providing expert guidance to engineering teams. You will be a thought leader in leveraging GCP's advanced data services to solve complex business challenges, optimize our data infrastructure at scale, and foster a culture of data excellence.

Responsibilities:
- Define and champion the strategic direction for our data architecture and infrastructure on Google Cloud Platform, aligning with business objectives and future growth.
- Architect and oversee the development of highly scalable, resilient, and cost-effective data platforms and pipelines on GCP, leveraging services like BigQuery, Dataflow, Cloud Composer, and Dataproc.
- Provide expert-level guidance and technical leadership to senior data engineers and development teams on best practices for data modeling, ETL/ELT processes, and data warehousing within GCP.
- Drive the adoption of cutting-edge GCP data technologies and methodologies to enhance our data capabilities and efficiency.
- Lead the design and implementation of comprehensive data governance frameworks, security protocols, and compliance measures within the Google Cloud environment.
- Collaborate closely with executive leadership, product management, data science, and analytics teams to translate business vision into robust and scalable data solutions on GCP.
- Identify and mitigate critical technical risks and challenges related to our data infrastructure and architecture on GCP.
- Establish and enforce data quality standards, monitoring systems, and incident response processes within the GCP data landscape.
- Mentor and develop senior data engineers, fostering their technical expertise and leadership skills within the Google Cloud context.
- Evaluate and recommend new GCP services and third-party tools to optimize our data ecosystem.
- Represent the data engineering team in strategic technical discussions and contribute to the overall technology roadmap.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in data engineering roles, with a significant, deep focus on the Google Cloud Platform.
- Expert-level knowledge of GCP's core data engineering services and best practices for building scalable, reliable solutions.
- Proven ability to architect and implement complex data warehousing and data lake solutions on GCP (BigQuery, Cloud Storage).
- Mastery of SQL and extensive experience with programming languages relevant to data engineering on GCP (e.g., Python, Scala, Java).
- Deep understanding of data governance principles, security best practices within GCP (IAM, Security Command Center), and compliance frameworks (e.g., GDPR, HIPAA).
- Exceptional problem-solving, strategic thinking, and analytical skills, with the ability to navigate complex technical and business challenges.
- Outstanding communication, presentation, and influencing skills, with the ability to articulate complex technical visions to both technical and non-technical audiences, including executive leadership.
- Proven track record of leading and mentoring high-performing data engineering teams in a cloud-first environment.

Preferred Qualifications / Skills:
- Google Cloud Certified Professional Data Engineer.
- Extensive experience with infrastructure-as-code tools for GCP (e.g., Terraform, Deployment Manager).
- Deep expertise in data streaming technologies on GCP (e.g., Dataflow, Pub/Sub, Apache Beam).
- Proven experience integrating machine learning workflows and MLOps on GCP (e.g., Vertex AI).
- Significant contributions to open-source data projects or active participation in the GCP data engineering community.
- Experience defining and implementing data mesh or data fabric architectures on GCP.
- Strong understanding of enterprise architecture principles and their application within the Google Cloud ecosystem.
- Demonstrated ability to drive significant technical initiatives and influence organizational data strategy.

Why join Genpact:
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
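
The "establish and enforce data quality standards" responsibility usually takes the form of a validation gate that runs before data is loaded (for example, as a Cloud Composer task ahead of a BigQuery load). A minimal pure-Python sketch; the rules and records below are illustrative assumptions, and production teams would typically use a framework rather than hand-rolled checks.

```python
# Sketch only: a lightweight data-quality gate of the kind a GCP pipeline
# might run before loading a warehouse table. Rules and records are
# illustrative assumptions.
def quality_report(rows, required, not_null):
    """Count rows failing schema (missing keys) or null checks."""
    failures = {"missing_field": 0, "null_value": 0}
    for row in rows:
        if not required.issubset(row):
            failures["missing_field"] += 1
        elif any(row.get(col) is None for col in not_null):
            failures["null_value"] += 1
    return failures

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},   # null email
    {"id": 3},                  # missing email field entirely
]
report = quality_report(rows, required={"id", "email"}, not_null={"email"})
print(report)
```

A pipeline would fail the load (or quarantine the bad rows) whenever any counter in the report is non-zero, which is the enforcement half of a data quality standard.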

Posted 1 month ago

Apply

3.0 - 5.0 years

14 - 20 Lacs

Bengaluru

Work from Office

• Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies
• Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have: Experience with Vertex AI Matching Engine, or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have: Experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have: A strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real-time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation

GCP Tools Experience:
• ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform
• Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus)
• Storage: BigQuery, Cloud Storage, Firestore
• Processing: Dataproc (PySpark), Dataflow (batch & stream)
• Ingestion: Pub/Sub, Cloud Functions, Cloud Run
• Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run
• CI/CD & IaC: GitHub Actions, GitLab CI
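
The embedding-based retrieval and Recall@K evaluation this posting asks for reduce to one core loop: score a user vector against item vectors, take the top K, and check whether the relevant items appear. A numpy sketch with random stand-in embeddings (real systems would use learned two-tower vectors and an ANN index like FAISS or ScaNN instead of brute force):

```python
# Sketch only: brute-force embedding retrieval plus a Recall@K metric,
# the core loop behind a two-tower / ANN recommender stack.
# Embeddings are random stand-ins for learned user/item vectors.
import numpy as np

rng = np.random.default_rng(42)
item_emb = rng.normal(size=(100, 16))
item_emb /= np.linalg.norm(item_emb, axis=1, keepdims=True)

# Make the "user" vector close to item 7, so retrieval should find it.
user_emb = item_emb[7] + 0.05 * rng.normal(size=16)

def top_k(user, items, k):
    """Cosine retrieval (item vectors pre-normalized, so dot = cosine)."""
    scores = items @ (user / np.linalg.norm(user))
    return np.argsort(scores)[::-1][:k]

def recall_at_k(retrieved, relevant):
    return len(set(retrieved) & set(relevant)) / len(relevant)

retrieved = top_k(user_emb, item_emb, k=10)
print("Recall@10:", recall_at_k(retrieved.tolist(), {7}))
```

Swapping the brute-force `top_k` for an ANN index changes only the retrieval call; the Recall@K evaluation stays identical, which is what makes offline metric comparisons across index backends meaningful.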

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Lead Software Engineer within the Enterprise Application & Cloud Transformation team. In this role, you will: Lead complex technology Cloud initiatives including those that are companywide with broad impact. Act as a key contributor in automating the provisioning of Cloud infrastructure using Infrastructure as Code. Make decisions in developing standards and companywide best practices for engineering and large-scale technology solutions. Own design, optimization, and documentation of the engineering aspects of the Cloud platform. Maintain an understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives. Review and analyze complex, large-scale technology solutions in Cloud for strategic business objectives and solve technical challenges that require in-depth evaluation of multiple parameters, including intangibles or unprecedented technical factors. Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals. Build and enable cloud infrastructure, and automate the orchestration of the entire GCP Cloud Platform for the Wells Fargo enterprise. Work in a globally distributed team to provide innovative and robust Cloud-centric solutions. Work closely with the Product Team and vendors to develop and deploy Cloud services to meet customer expectations. Required Qualifications: 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education 3+ years working with GCP and a proven track record of building complex infrastructure programmatically with IaC tools. Must have 2+ years of hands-on experience with the Infrastructure as Code tool Terraform and GitHub. Must have professional cloud certification on GCP.
Proficient in container-based solution services; has handled at least 2-3 large-scale Kubernetes-based infrastructure build-outs and provisioning of services on GCP. Exposure to services like GKE, Cloud Functions, Cloud Run, Cloud Build, Artifactory, etc. Infrastructure and automation technologies: orchestration, Harness, Terraform, Service Mesh, Kubernetes, API development, Test-Driven Development Sound knowledge of the following areas with expertise in one of them: 1. Proficient with a thorough understanding of Cloud service offerings on Storage and Database. 2. Should have a good understanding of networking, firewalls, load balancing concepts (IP, DNS, guardrails, VNets) and exposure to cloud security, AD, authentication methods, RBAC. 3. Proficient with a thorough understanding of Cloud service offerings on Data, Analytics, AI/ML. Exposure to Analytics and AI/ML services like BigQuery, Vertex AI, Dataproc, etc. 4. Proficient with a thorough understanding of Cloud service offerings on Security, Data Protection and Security policy implementations. Thorough understanding of landing zones and networking, security best practices, monitoring and logging, risk and controls. Should have a good understanding of Control Plane, Azure Arc and Google Anthos. Experience working in an Agile environment and product backlog grooming against ongoing engineering work Enterprise Change Management and change control; experience working within a procedural and process-driven environment Desired Qualifications: Should have exposure to Cloud governance and logging/monitoring tools. Experience with Agile, CI/CD, DevOps concepts and SRE principles. Experience in scripting (Shell, Python, Go) Excellent verbal, written, and interpersonal communication skills. Ability to articulate technical solutions to both technical and business audiences Ability to deliver and engage with partners effectively in a multi-cultural environment by demonstrating co-ownership and accountability in a matrix structure.
Delivery focus and willingness to work in a fast-paced, enterprise environment.

Posted 1 month ago

Apply

2.0 - 4.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Job Description: AI Engineer (Junior / Associate Level) About the Role We are looking for a passionate and hands-on AI Developer with around 3 years of experience in building and deploying machine learning models and working with the latest AI tools and frameworks. You will be working closely with our data science and engineering teams to develop smart, scalable, and production-ready AI solutions. Key Responsibilities Design, develop, test, and deploy machine learning models for classification, regression, recommendation, or NLP tasks. Work with state-of-the-art AI tools and libraries such as LangChain, Hugging Face, OpenAI, LlamaIndex, etc. Integrate Large Language Models (LLMs) into applications via APIs or custom fine- tuning. Build data pipelines for training and inference, ensuring model performance and robustness. Collaborate with software developers, product managers, and data engineers to turn AI models into usable products. Stay updated with the latest research and innovations in the AI/ML space and bring new ideas into development. Optimize model performance and scale AI solutions for production. Required Skills & Experience Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field. 2 to 4 years of hands-on experience in machine learning and deep learning. Proficiency in Python and libraries like TensorFlow, PyTorch, Scikit-learn, Pandas, NumPy. Good understanding of LLMs and experience working with platforms like OpenAI, Claude, Cohere, or Hugging Face Transformers. Experience in prompt engineering, fine-tuning, or using frameworks like LangChain, LlamaIndex, or Haystack. Exposure to cloud platforms (AWS, GCP, Azure) and tools like Docker, Git, CI/CD workflows. Strong analytical and problem-solving skills. Understanding of data preprocessing, feature engineering, and model evaluation techniques. Good to Have Experience building AI chatbots or virtual assistants. 
Knowledge of Vector databases (e.g., Chroma, Pinecone, Weaviate). Familiarity with MLOps tools like MLflow, Kubeflow, Vertex AI, etc. Experience with RESTful APIs or microservices architecture. Participation in Kaggle competitions or open-source AI projects. What We Offer Opportunity to work on real-world AI applications and products. A collaborative, learning-first environment. Exposure to enterprise-grade AI deployments and tools. Flexible working hours and remote work options. Competitive compensation and career growth path.
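The retrieval step behind the RAG and vector-database skills above can be illustrated without any external service. In this toy sketch, bag-of-words cosine similarity stands in for a real embedding model and vector database, and the documents and question are invented:

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    # Cosine similarity over token counts; a real system would compare
    # dense embeddings from a model such as Sentence Transformers.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / ((na * nb) or 1.0)

def top_chunk(question, chunks):
    # Retrieve the most relevant chunk for the question.
    q = tokens(question)
    return max(chunks, key=lambda c: cosine(q, tokens(c)))

chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
]
context = top_chunk("How long do refunds take?", chunks)
prompt = f"Answer using only this context:\n{context}\n\nQ: How long do refunds take?"
```

The assembled prompt would then be sent to an LLM; swapping in real embeddings and a vector store (Chroma, Pinecone, Weaviate) keeps the same overall shape.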

Posted 1 month ago

Apply

9.0 - 12.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience such as GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.
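As a hedged illustration of the regression and feature-engineering skills this role lists, here is a scikit-learn sketch on synthetic data; the interaction feature and Ridge model are arbitrary choices for illustration, not a prescription:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, size=(500, 2))
# Engineered feature: an interaction term often helps linear models.
X_feat = np.column_stack([X, X[:, 0] * X[:, 1]])
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
score = r2_score(y_te, model.predict(X_te))
print(round(score, 3))  # near 1.0 because the engineered feature captures the signal
```

Without the interaction column, the same linear model would miss part of the signal, which is the point of feature engineering.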

Posted 1 month ago

Apply

9.0 - 12.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience such as GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.

Posted 1 month ago

Apply

3.0 - 6.0 years

15 - 30 Lacs

Hyderabad

Remote

Role & responsibilities We are seeking a skilled Data Engineer to join our team and enhance the efficiency and accuracy of health claim fraud detection. This role involves designing, building, and optimizing data pipelines, integrating AI/ML models in Vertex AI, and improving data processing workflows to detect fraudulent claims faster and more accurately. Qualifications: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. 3+ years of experience in data engineering, preferably in the healthcare or financial sector. Strong experience with Google Cloud (GCP) services: Vertex AI, BigQuery, Dataflow, Pub/Sub, Dataproc. Expertise in SQL and Python for data processing and transformation. Experience with ML model deployment and monitoring in Vertex AI. Knowledge of ETL pipelines, data governance, and security best practices. Familiarity with healthcare data standards (HL7, FHIR) and compliance frameworks. Experience with Apache Beam, Spark, or Kafka is a plus. Preferred Qualifications: Experience in fraud detection models using AI/ML. Hands-on experience with MLOps in GCP. Strong problem-solving skills and the ability to work in a cross-functional team.
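One common first pass behind claim fraud detection pipelines can be shown in a few lines of pandas: flag claims whose amounts are statistical outliers. The data, threshold, and column names here are invented for illustration; a production pipeline would feed far richer features to a model in Vertex AI.

```python
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4, 5, 6],
    "amount": [100.0, 120.0, 110.0, 100.0, 5000.0, 90.0],
})

# Global z-score outlier flag: a crude but common first-pass fraud signal.
mu, sigma = claims["amount"].mean(), claims["amount"].std()
claims["suspicious"] = claims["amount"] > mu + 2 * sigma
print(claims.loc[claims["suspicious"], "claim_id"].tolist())
```

In practice this kind of rule serves only as a cheap pre-filter; per-provider baselines, temporal patterns, and a trained classifier would follow downstream.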

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Mumbai

Work from Office

Job Summary: UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex and large-scale business problems for UPS operations. This role would also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem, and develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to be able to present your cutting-edge solutions to both technical and business leadership. Responsibilities: Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods and AI Be actively involved in understanding and converting business use cases to technical requirements for modelling. Query, analyze and extract insights from large-scale structured and unstructured data from different data sources utilizing different platforms, methods and tools like BigQuery, Google Cloud Storage, etc. Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling. Actively drive modelling of business problems into ML/AI models, and work closely with the stakeholders for model evaluation and acceptance. Work closely with the MLOps team to productionize new models, support enhancements and resolve any issues within existing production AI applications. Prepare extensive technical documentation, dashboards and presentations for technical and business stakeholders including leadership teams. Qualifications Expertise in Python, SQL.
Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc. Strong understanding of statistical concepts and methods (like hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning. Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle. Strong ownership and collaborative qualities in the relevant domain. Takes initiative to identify and drive opportunities for improvement and process streamlining. Solid oral and written communication skills, especially around analytical concepts and methods. Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences. Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience. Bonus Qualifications NLP, Gen AI, LLM knowledge/experience Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc. Knowledge and experience in MLOps principles and tools in GCP. Experience working in an Agile environment, understanding of Lean Agile principles.
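One of the statistical methods named above, hypothesis testing, can be made concrete in a few lines. This is a toy sketch of Welch's two-sample t statistic computed by hand on synthetic "delivery time" data; all numbers and names are invented for illustration.

```python
import numpy as np

def welch_t(a, b):
    # Welch's two-sample t statistic (does not assume equal variances).
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
control = rng.normal(10.0, 1.0, 200)  # e.g., delivery times before a change
treat = rng.normal(9.5, 1.0, 200)     # after the change: slightly faster
t = welch_t(treat, control)
print(t < -2)  # a large negative t suggests a real reduction
```

In practice one would get the p-value from `scipy.stats.ttest_ind(..., equal_var=False)` or statsmodels rather than hand-rolling the statistic; the point here is the underlying concept.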

Posted 1 month ago

Apply

4.0 - 9.0 years

7 - 12 Lacs

Mumbai

Work from Office

Site Reliability Engineers (SREs) - Robust background in Google Cloud Platform (GCP) | RedHat OpenShift administration Responsibilities: System Reliability: Ensure the reliability and uptime of critical services and infrastructure. Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services. Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention. Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery. Collaboration: Work closely with development and operations teams to improve system reliability and performance. Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth. Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures. Qualifications: Education: Bachelor's degree in Computer Science, Engineering, or a related field. Experience: 4+ years of experience in site reliability engineering or a similar role. Skills: Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.). Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.) Experience with automation tools (Terraform, Ansible, Puppet). Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.). Strong scripting skills (Python, Bash, etc.). Knowledge of networking concepts and protocols. Experience with monitoring tools (Prometheus, Grafana, etc.). Preferred Certifications: Google Cloud Professional DevOps Engineer Google Cloud Professional Cloud Architect Red Hat Certified Engineer (RHCE) or similar Linux certification

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad, Secunderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience such as GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 18 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

Location: Bangalore/Gurgaon/Hyderabad/Mumbai Must have skills: Data Scientist / Transformation Leader & at least 5 years in Telecom Analytics Good to have skills: Gen AI, Agentic AI Job Summary: About Global Network Data & AI: Accenture Strategy & Consulting's Global Network - Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition. About Comms & Media practice: Comms & Media (C&M) is one of the Industry Practices within Accenture's S&C Global Network team. It focuses on serving clients across specific industries: Communications, Media & Entertainment. Communications focuses primarily on industries related to telecommunications and information & communication technology (ICT). This team serves most of the world's leading wireline, wireless, cable and satellite communications and service providers. Media & Entertainment focuses on industries like broadcast, entertainment, print and publishing. Globally, the Accenture Comms & Media practice works to develop value growth strategies for its clients and infuse AI & GenAI to help deliver on their top business imperatives, i.e., revenue growth & cost reduction. From multi-year Data & AI transformation projects to shorter, more agile engagements, we have a rapidly expanding portfolio of hyper-growth clients and an increasing footprint with next-gen solutions and industry practices. Roles & Responsibilities: A Telco-domain-experienced data science consultant is responsible for helping clients design and deliver AI solutions.
He/she should be strong in Telco domain, AI fundamentals and should have good hands-on experience working with the following: Ability to work with large data sets and present conclusions to key stakeholders; Data management using SQL. Propose solutions to the client based on gap analysis for the existing Telco platforms that can generate long term & sustainable value to the client. Gather business requirements from client stakeholders via interactions like interviews and workshops with all stakeholders Track down and read all previous information on the problem or issue in question. Explore obvious and known avenues thoroughly. Ask a series of probing questions to get to the root of a problem. Ability to understand the as-is process; understand issues with the processes which can be resolved either through Data & AI or process solutions and design detail level to-be state Understand customer needs and identify/translate them to business requirements (business requirement definition), business process flows and functional requirements and be able to inform the best approach to the problem. Adopt a clear and systematic approach to complex issues (i.e. A leads to B leads to C). Analyze relationships between several parts of a problem or situation. Anticipate obstacles and identify a critical path for a project. Independently able to deliver products and services that empower clients to implement effective solutions. Makes specific changes and improvements to processes or own work to achieve more. Work with other team members and make deliberate efforts to keep others up to date. Establish a consistent and collaborative presence with clients and act as the primary point of contact for assigned clients; escalate, track, and solve client issues. Partner with clients to understand end clients business goals, marketing objectives, and competitive constraints. Storytelling Crunch the data & numbers to craft a story to be presented to senior client stakeholders. 
Professional & Technical Skills: Overall 10+ years of experience in Data Science & at least 5 years in Telecom Analytics Master's (MBA/MSc/MTech) from a Tier 1/Tier 2 school and Engineering from a Tier 1 school Demonstrated experience in solving real-world data problems through Data & AI Direct onsite experience (i.e., experience facing the client inside client offices in India or abroad) is mandatory. Please note we are looking for client-facing roles. Proficiency with data mining, mathematics, and statistical analysis Advanced pattern recognition and predictive modeling experience; knowledge of advanced analytical fields in text mining, image recognition, video analytics, IoT, etc. Execution-level understanding of econometric/statistical modeling packages Traditional techniques like linear/logistic regression, multivariate statistical analysis, time series techniques, fixed/random effects modelling. Machine learning techniques like Random Forest, Gradient Boosting, XGBoost, decision trees, clustering, etc. Knowledge of deep learning modeling techniques like RNN, CNN, etc. Experience using digital & statistical modeling software (one or more): Python, R, PySpark, SQL, BigQuery, Vertex AI Proficient in Excel, MS Word, PowerPoint, and corporate soft skills Knowledge of dashboard creation platforms: Excel, Tableau, Power BI, etc. Excellent written and oral communication skills with the ability to clearly communicate ideas and results to non-technical stakeholders. Strong analytical and problem-solving skills and good communication skills Self-starter with the ability to work independently across multiple projects and set priorities Strong team player Proactive and solution-oriented, able to guide junior team members.
Execution knowledge of optimization techniques is a good-to-have Exact optimization Linear, Non-linear optimization techniques Evolutionary optimization Both population and search-based algorithms Cloud platform Certification, experience in Computer Vision are good-to-haves Qualification Experience: Overall 10+ years of experience in Data Science & at least 5 years in Telecom Educational Qualification: Masters (MBA/MSc/MTech) from a Tier 1/Tier 2 and Engineering from Tier 1 school
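The modelling techniques this role names (e.g., gradient boosting for classification) can be sketched with scikit-learn's implementation on synthetic data; this is an illustrative stand-in, not telecom client code:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for, say, churn labels.
X, y = make_classification(n_samples=600, n_features=8, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

clf = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

XGBoost follows the same fit/predict shape with its own `XGBClassifier`; swapping implementations rarely changes the surrounding pipeline code.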

Posted 1 month ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Job Description As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client, enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You'll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI workloads. Responsibilities: Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices. Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns. Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon). Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods. Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking. Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure). Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance. Mentor engineering teams in code best practices, versioning, and LLM lifecycle maintenance.
Key Skills: Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model-serving platforms. Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization Experience managing compute, memory, and storage requirements of LLMs across GCP, Azure, or AWS environments Qualifications & Experience: 5+ years in ML/AI engineering with at least 2 years working on LLMs or NLP-heavy systems. Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind. Clear communicator who collaborates well with business, data science, and DevOps teams. Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing. Curious and future-facing, always exploring new techniques and pushing the envelope on GenAI innovation. Passionate about data ethics, responsible AI, and building inclusive systems that scale.
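Context window management, one of the skills above, often comes down to fitting retrieved chunks into a fixed token budget. A hedged sketch follows; whitespace token counts stand in for a real tokenizer (e.g., tiktoken), and the chunks are invented:

```python
def pack_context(chunks, budget_tokens):
    # Greedy packing of chunks (assumed pre-sorted by relevance) into
    # a token budget. Whitespace splitting approximates a real tokenizer.
    packed, used = [], 0
    for chunk in chunks:
        n = len(chunk.split())
        if used + n > budget_tokens:
            continue  # skip chunks that would overflow the window
        packed.append(chunk)
        used += n
    return "\n\n".join(packed)

chunks = ["alpha beta gamma", "one two three four five", "x y"]
ctx = pack_context(chunks, budget_tokens=6)
print(ctx)
```

Real systems refine this with model-specific tokenizers, reserved space for the system prompt and the answer, and truncation of individual chunks, but the budget-accounting shape is the same.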

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Senior Principal Consultant - GCP AI Engineer! In this role, we are seeking an experienced and passionate GCP AI Engineer to join our team and drive innovation through the application of artificial intelligence solutions on the Google Cloud Platform (GCP). The ideal candidate will have a strong understanding of AI principles, cloud computing, and a track record of delivering impactful AI-driven projects. Responsibilities 1. GCP AI Solutions Development: Design and develop AI solutions using GCP AI services, including Vertex AI, BigQuery ML, and other specialized AI tools. 2. Data Preprocessing and Engineering: Preprocess and engineer large datasets using GCP data tools and techniques to prepare them for AI model training. 3. Model Training and Deployment: Train and deploy AI models on GCP, optimizing model performance and efficiency while ensuring scalability and reliability. 4. Algorithm Selection and Tuning: Select appropriate AI algorithms and hyperparameters for specific business problems, and tune models to achieve optimal performance. 5. Cloud Infrastructure Management: Provision and manage GCP cloud infrastructure, including virtual machines, containers, and storage systems, to support AI workloads. 6. Performance Monitoring and Optimization: Continuously monitor and evaluate AI models in production, identifying and addressing performance bottlenecks and opportunities for optimization. 7. Collaboration and Communication: Collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to deliver end-to-end AI solutions. 8. Documentation and Knowledge Sharing: Document AI project methodologies, findings, and best practices, and share knowledge with the broader team to foster a culture of innovation. 9. Stay Up to Date: Keep abreast of the latest advancements in AI and GCP AI services, attending conferences, workshops, and training programs to enhance skills and knowledge. 10. Hands-on experience developing solutions with GCP's ML APIs. Qualifications we seek in you! Minimum Qualifications: Bachelor's degree or equivalent experience in Computer Science, Artificial Intelligence, or a related field. Relevant experience in AI development and deployment, with a focus on GCP AI services. Strong programming skills in Python and proficiency with cloud-based AI frameworks and libraries. Solid understanding of machine learning algorithms, including supervised and unsupervised learning, and deep learning architectures. Experience with natural language processing, computer vision, or other specialized AI domains is a plus. Familiarity with GCP data management tools and services, such as BigQuery, Cloud Storage, and Cloud Dataflow. Preferred Qualifications/Skills: Excellent problem-solving skills and a strong analytical mindset. Strong communication skills, both written and verbal, with the ability to explain technical concepts to non-technical stakeholders. Ability to work independently and as part of a team in a fast-paced, dynamic environment. Passion for AI and its potential to solve real-world problems using GCP technologies. Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 - 10.0 years

30 - 37 Lacs

Pune

Remote

We are hiring a Senior Machine Learning Engineer for an MNC. Job Type: Direct, full-time role Location: PAN India (Remote) Senior Machine Learning Engineer Responsibilities Develop, implement, and maintain machine learning models using scikit-learn and SciPy. Build and deploy ML models for production use cases. Work with regression models and optimization techniques. Develop and integrate APIs using the Flask framework. Utilize GCP services (App Engine, Cloud Tasks, Dataflow, BigQuery, Bigtable, Vertex AI). Optimize CI/CD pipelines using Azure DevOps. Collaborate with cross-functional teams to deploy scalable ML solutions. Qualifications Strong proficiency in Python and core ML libraries: scikit-learn and SciPy. Hands-on experience with regression models and optimization techniques. Experience with Flask for API development and deployment. Proficiency in GCP services and ML operations on cloud infrastructure. Experience with CI/CD tools, especially Azure DevOps. Solid understanding of data structures, algorithms, and software engineering practices. Familiarity with Agile methodologies and technical documentation. Strong analytical, communication, and problem-solving skills.
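The Flask-plus-model pattern this role describes can be sketched end to end: a stub linear "model" served behind a JSON endpoint and exercised with Flask's built-in test client. The route name and payload shape are assumptions for illustration, and the coefficient vector stands in for a trained scikit-learn model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
COEF, INTERCEPT = [2.0, -1.0], 0.5  # stand-in for a trained regression model

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [x1, x2]}; returns the linear prediction.
    features = request.get_json()["features"]
    y = sum(c * x for c, x in zip(COEF, features)) + INTERCEPT
    return jsonify({"prediction": y})

# Exercise the endpoint in-process; production would use a WSGI server.
client = app.test_client()
resp = client.post("/predict", json={"features": [3.0, 1.0]})
print(resp.get_json())
```

Replacing the coefficient stub with a pickled scikit-learn estimator's `predict` call keeps the endpoint code essentially unchanged.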

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

Key Responsibilities Design and implement GenAI architectures leveraging Google Cloud and Gemini AI models Lead solution architecture and integration of generative AI models into enterprise applications Collaborate with data scientists, engineers, and business stakeholders to define AI use cases and technical strategy Develop and optimize prompt engineering, model fine-tuning, and deployment pipelines Design scalable data storage and retrieval layers using PostgreSQL, BigQuery, and vector databases (e.g., Vertex AI Search, Pinecone, or FAISS) Evaluate third-party GenAI APIs and tools for integration Ensure compliance with data security, privacy, and responsible AI guidelines Support performance tuning, monitoring, and optimization of AI solutions in production Stay updated with evolving trends in GenAI and GCP offerings, especially related to Gemini and Vertex AI Required Skills and Qualifications Proven experience architecting AI/ML or GenAI systems on Google Cloud Platform Hands-on experience with Google Gemini, Vertex AI, and related GCP AI tools Strong understanding of LLMs, prompt engineering, and text generation frameworks Proficiency in PostgreSQL, including advanced SQL and performance tuning Experience with MLOps, CI/CD pipelines, and AI model lifecycle management Solid knowledge of Python, APIs, RESTful services, and cloud-native architecture Familiarity with vector databases and semantic search concepts Strong communication and stakeholder management skills Preferred Qualifications GCP certifications (e.g., Professional Cloud Architect, Professional Machine Learning Engineer) Experience in model fine-tuning and custom LLM training Knowledge of LangChain and RAG (Retrieval-Augmented Generation) frameworks Exposure to data privacy regulations (GDPR, HIPAA, etc.) Background in natural language processing (NLP) and deep learning

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 22 Lacs

Hyderabad

Work from Office

Role - Machine Learning Engineer

Required Skills & Experience
- 5+ years of hands-on experience in building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering, adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred; R also valued).
- Solid understanding of various machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques such as Deep Learning, Natural Language Processing (NLP), or Computer Vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Interested candidates, share your CV to dikshith.nalapatla@motivitylabs.com
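One small piece of the MLOps lifecycle this role mentions (model versioning and deployment) is persisting a trained model as a versioned artifact that a serving container later loads. A minimal hedged sketch; the file name and version scheme are illustrative, and a real registry would also record metadata alongside the artifact:

```python
# Hypothetical sketch: persist a trained model and reload it for serving.
import os
import tempfile

import joblib
from sklearn.linear_model import LogisticRegression

# Toy training step; a real pipeline would train on production data.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
model = LogisticRegression().fit(X, y)

# Serialize the model under a versioned name (illustrative scheme).
artifact = os.path.join(tempfile.mkdtemp(), "model-v1.joblib")
joblib.dump(model, artifact)

# A serving process (e.g., inside a Docker container) would load the
# same artifact at startup and answer prediction requests from it.
restored = joblib.load(artifact)
```

Packaging this load-and-serve step behind a REST endpoint inside a Docker image, then scaling it on Kubernetes, is the deployment pattern the listing describes.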

Posted 1 month ago

Apply