Jobs
Interviews

207 Vertex AI Jobs - Page 7

Set up a job alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job Title: Senior Engineer - Python & LLMs
Company: Xebia
Experience Required: 6 to 10 Years
Employment Type: Full-Time
Work Mode: Hybrid (3 days/week in office)
Locations: Chennai, Bangalore, Hyderabad, Pune, Jaipur, Bhopal, Gurugram

Job Description:
Xebia is hiring a Senior Engineer - Python & LLMs to join our AI & Engineering team. We're looking for passionate individuals experienced in Python, Kubernetes, and LLMs to help us build scalable, intelligent applications.

Key Responsibilities:
- Develop backend services using Python on cloud platforms (AWS, Azure, GCP)
- Deploy and manage applications using Kubernetes and Docker
- Integrate LLMs (OpenAI, Vertex AI, Hugging Face, etc.) into solutions
- Design and fine-tune prompts for GenAI applications
- Collaborate across teams to drive AI-driven feature delivery

Must-Have Skills:
- Strong Python skills for API/backend development
- Hands-on experience with Kubernetes, Docker, and CI/CD
- Experience with cloud-native deployments (AWS/Azure/GCP)
- Proficiency with LLMs and prompt engineering

Nice to Have:
- Vector databases (Pinecone, Weaviate, FAISS)
- MLOps pipelines and orchestration experience
- Understanding of cloud cost, security, and performance trade-offs

Important: Only immediate joiners or candidates with a 15-day notice period will be considered.

Why Join Xebia?
- Work on real-world GenAI initiatives
- Be part of a globally respected engineering culture
- Enjoy continuous learning, certifications, and team collaboration

To Apply: Send your CV to vijay.s@xebia.com with the subject line "Senior Engineer Python & LLMs" and include: full name, total experience, current CTC, expected CTC, current location, preferred Xebia location (Chennai / Bangalore / Hyderabad / Pune / Jaipur / Bhopal / Gurugram), notice period / last working day if serving (only 15 days accepted), primary skills, and LinkedIn profile.
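The prompt-engineering skill named above can be illustrated with a minimal sketch: assembling a few-shot, chat-style prompt as structured messages. The message schema here is a generic assumption; the exact payload shape differs between OpenAI, Vertex AI, and other providers.

```python
import json

# Illustrative sketch only: builds a chat-style request payload for an LLM.
# The role/content schema is an assumption; real provider APIs vary.
def build_prompt(system_rules, user_query, examples):
    """Assemble a few-shot prompt as a list of chat messages."""
    messages = [{"role": "system", "content": system_rules}]
    for question, answer in examples:  # few-shot examples steer the model
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_query})
    return messages

payload = build_prompt(
    "Answer in one short sentence.",
    "What does Kubernetes do?",
    [("What is Docker?", "Docker packages applications into portable containers.")],
)
print(json.dumps(payload, indent=2))
```

The same message list could then be sent to whichever provider the team uses; only the transport call changes.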

Posted 2 months ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Gurugram

Work from Office

What we're looking for:
- 4+ years of experience in Data Science
- 1-2 years of proven work in Machine Learning
- Minimum 2 years of experience in Deep Learning
- Proficiency in Python, Linux, and containers (Docker/K8s)
- Hands-on with GenAI frameworks like LangChain, LlamaIndex, Hugging Face Transformers
- Experience deploying models using Vertex AI or Azure AI
- Currently contributing to a live project (public GitHub profile preferred)

Key Responsibilities:
- Deploy and manage AI workloads in hybrid cloud environments using RHEL AI and OpenShift AI, with guidance and training provided.
- Collaborate with teams to fine-tune and operationalize AI models, including large language models (LLMs), using tools like InstructLab.
- Build and maintain containerized applications with Kubernetes or similar platforms, adapting to OpenShift as needed.
- Support the development, training, and deployment of AI/ML models, leveraging frameworks like PyTorch or TensorFlow.
- Assist in implementing MLOps practices for model lifecycle management, with exposure to CI/CD pipelines.
- Troubleshoot and optimize Linux-based systems to ensure reliable AI performance.
- Learn and apply Red Hat-specific tools and best practices through on-the-job training and resources.
- Document workflows and contribute to Reve Cloud's team knowledge sharing.

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Remote, India

On-site

- Spearhead innovative solution designs, leveraging cutting-edge cloud technologies to address complex client challenges and drive digital transformation
- Cultivate strong client relationships through exceptional advocacy skills, effectively communicating Techolution's value proposition and tailoring solutions to meet specific business needs
- Lead engaging presales activities, including product demonstrations, technical presentations, and proof-of-concept implementations to showcase our expertise and win new business
- Architect scalable and resilient cloud solutions on AWS/GCP/Azure platforms, ensuring optimal performance, security, and cost-efficiency for enterprise clients
- Apply a robust engineering mindset to troubleshoot complex technical issues, develop innovative workarounds, and continuously improve solution quality
- Drive ownership of client projects from inception to deployment, ensuring seamless integration of Techolution's solutions within existing client infrastructures
- Demonstrate an unbeatable work ethic by consistently delivering high-quality solutions on time and within budget, often exceeding client expectations
- Exhibit a passionate seeker mindset by staying abreast of emerging technologies and industry trends, proactively identifying opportunities for innovation and growth
- Facilitate knowledge transfer sessions with clients and internal teams, promoting best practices and fostering a culture of continuous learning and improvement
- Collaborate with cross-functional teams to align technical solutions with business objectives, ensuring maximum value delivery to clients

Posted 2 months ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - GCP Senior Data Engineer

We are seeking a highly accomplished and strategic Google Cloud Data Engineer with deep experience in data engineering and a significant, demonstrable focus on the Google Cloud Platform (GCP). In this leadership role, you will be instrumental in defining and driving our overall data strategy on GCP, architecting transformative data solutions, and providing expert guidance to engineering teams. You will be a thought leader in leveraging GCP's advanced data services to solve complex business challenges, optimize our data infrastructure at scale, and foster a culture of data excellence.

Responsibilities:
- Define and champion the strategic direction for our data architecture and infrastructure on Google Cloud Platform, aligning with business objectives and future growth.
- Architect and oversee the development of highly scalable, resilient, and cost-effective data platforms and pipelines on GCP, leveraging services like BigQuery, Dataflow, Cloud Composer, Dataproc, and more.
- Provide expert-level guidance and technical leadership to senior data engineers and development teams on best practices for data modeling, ETL/ELT processes, and data warehousing within GCP.
- Drive the adoption of cutting-edge GCP data technologies and methodologies to enhance our data capabilities and efficiency.
- Lead the design and implementation of comprehensive data governance frameworks, security protocols, and compliance measures within the Google Cloud environment.
- Collaborate closely with executive leadership, product management, data science, and analytics teams to translate business vision into robust and scalable data solutions on GCP.
- Identify and mitigate critical technical risks and challenges related to our data infrastructure and architecture on GCP.
- Establish and enforce data quality standards, monitoring systems, and incident response processes within the GCP data landscape.
- Mentor and develop senior data engineers, fostering their technical expertise and leadership skills within the Google Cloud context.
- Evaluate and recommend new GCP services and third-party tools to optimize our data ecosystem.
- Represent the data engineering team in strategic technical discussions and contribute to the overall technology roadmap.

Qualifications we seek in you!

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Progressive and impactful experience in data engineering roles, with a significant and deep focus on the Google Cloud Platform.
- Expert-level knowledge of GCP's core data engineering services and best practices for building scalable and reliable solutions.
- Proven ability to architect and implement complex data warehousing and data lake solutions on GCP (BigQuery, Cloud Storage).
- Mastery of SQL and extensive experience with programming languages relevant to data engineering on GCP (e.g., Python, Scala, Java).
- Deep understanding of data governance principles, security best practices within GCP (IAM, Security Command Center), and compliance frameworks (e.g., GDPR, HIPAA).
- Exceptional problem-solving, strategic thinking, and analytical skills, with the ability to navigate complex technical and business challenges.
- Outstanding communication, presentation, and influencing skills, with the ability to articulate complex technical visions to both technical and non-technical audiences, including executive leadership.
- Proven track record of leading and mentoring high-performing data engineering teams within a cloud-first environment.

Preferred Qualifications / Skills:
- Google Cloud Certified Professional Data Engineer.
- Extensive experience with infrastructure-as-code tools for GCP (e.g., Terraform, Deployment Manager).
- Deep expertise in data streaming technologies on GCP (e.g., Dataflow, Pub/Sub, Apache Beam).
- Proven experience in integrating machine learning workflows and MLOps on GCP (e.g., Vertex AI).
- Significant contributions to open-source data projects or active participation in the GCP data engineering community.
- Experience in defining and implementing data mesh or data fabric architectures on GCP.
- Strong understanding of enterprise architecture principles and their application within the Google Cloud ecosystem.
- Demonstrated ability to drive significant technical initiatives and influence organizational data strategy.

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 14 Lacs

Chennai

Work from Office

- Build automated ML pipelines for training, validation, deployment, and monitoring using Vertex AI Pipelines, Kubeflow, or TFX
- Leverage Vertex AI Workbench, Training, and Experiments for reproducible, collaborative model development
- Deploy, manage, and monitor models using Vertex AI Model Registry, Model Monitoring, and Prediction Endpoints
- Collaborate with Data Scientists to productionize notebooks and prototypes into scalable ML services
- Monitor model performance, detect data/model drift, and ensure data quality using Vertex AI Monitoring
- Containerize and orchestrate ML workloads using Docker and Kubernetes (GKE)
- Build and maintain robust CI/CD pipelines using Cloud Build, Jenkins, or Bitbucket Pipelines
- Ensure strong version control, security, and compliance across the ML lifecycle
- Maintain comprehensive documentation, templates, and artifacts to enable reproducibility, governance, and fast onboarding
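The drift-detection duty above can be sketched with the Population Stability Index (PSI), one common statistic for flagging a shift between a training baseline and live traffic. This is only an illustration; Vertex AI Model Monitoring computes its own drift statistics, and the 0.2 alert threshold is a convention, not a rule.

```python
import math

# Hedged sketch: PSI compares binned distributions of a baseline vs. live data.
def psi(expected, actual, bins=5):
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i % 10) for i in range(1000)]
same = [float(i % 10) for i in range(1000)]
shifted = [float(i % 10) + 4.0 for i in range(1000)]
print(round(psi(baseline, same), 4))  # near 0: no drift
print(psi(baseline, shifted) > 0.2)   # True: exceeds a common alert threshold
```

In a pipeline, this check would run on a schedule against prediction logs and page the team when the score crosses the threshold.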

Posted 2 months ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mohali

Hybrid

We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions.

Key Responsibilities:

Architecture & Strategy
- Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving.
- Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.).
- Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows.
- Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments.

Technical Leadership
- Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies.
- Lead architectural reviews and ensure compliance with security, performance, and governance policies.
- Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps.

Model Lifecycle & Engineering
- Oversee implementation of the model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows.
- Define architecture for Retrieval-Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc.
- Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models.

Data & Infrastructure
- Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores.
- Architect systems for synthetic data generation, augmentation, and real-time streaming inputs.
- Define solutions leveraging data lakes, data warehouses, and graph databases.

Client Engagement / Product Integration
- Interface with business/product stakeholders to align AI strategy with KPIs.
- Collaborate with DevOps teams to integrate models into products via APIs/microservices.

Required Skills & Experience:

Core Skills
- Strong foundation in AI/ML/DL (scikit-learn, TensorFlow, PyTorch, Transformers, LangChain, etc.)
- Advanced knowledge of generative AI (LLMs, diffusion models, multimodal models, etc.)
- Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes)
- Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar)
- Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design

Architecture & Programming
- Solid experience in architectural patterns (microservices, event-driven, serverless)
- Proficient in Python and optionally Java/Go
- Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana)

Tools & Platforms
- ML lifecycle tools: MLflow, Kubeflow, Vertex AI, SageMaker, Hugging Face, etc.
- Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have)
- Knowledge of security, privacy, and compliance (GDPR, SOC 2, HIPAA, etc.)
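The retrieval step of a RAG design mentioned in this posting can be sketched in a few lines. In production, a vector database (Pinecone, Weaviate, FAISS) would store learned embeddings; here, bag-of-words counts and cosine similarity stand in for both, purely as an illustration of the ranking idea.

```python
import math
from collections import Counter

# Toy "embedding": bag-of-words term counts (a stand-in for a learned model).
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Kubernetes schedules containers across a cluster",
    "Vector databases store embeddings for similarity search",
    "Terraform declares cloud infrastructure as code",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("how do embeddings enable similarity search"))
```

The retrieved passages would then be stuffed into the LLM prompt as grounding context; that prompt-assembly step is where RAG and prompt engineering meet.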

Posted 2 months ago

Apply

3.0 - 8.0 years

7 - 13 Lacs

Hyderabad

Work from Office

Role: Machine Learning Engineer

Required Skills & Experience:
- 3+ years of hands-on experience building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering, adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred; also R).
- Solid understanding of various machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques like deep learning, natural language processing (NLP), or computer vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Interested candidates, please share your CV at dikshith.nalapatla@motivitylabs.com.
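The feature-engineering requirement above boils down to turning raw records into numeric vectors. A minimal stdlib-only sketch, assuming toy records: one-hot encode a categorical column and z-score a numeric one; a real pipeline would use scikit-learn or BigQuery, but the transformation idea is the same.

```python
import statistics

# Hypothetical structured records standing in for a real dataset.
records = [
    {"city": "Hyderabad", "experience": 3.0},
    {"city": "Chennai", "experience": 8.0},
    {"city": "Hyderabad", "experience": 4.0},
]

# Fit encoding parameters from the data (like a transformer's .fit step).
cities = sorted({r["city"] for r in records})
mean = statistics.mean(r["experience"] for r in records)
std = statistics.pstdev(r["experience"] for r in records)

def featurize(record):
    """One-hot the city, z-score the experience (the .transform step)."""
    one_hot = [1.0 if record["city"] == c else 0.0 for c in cities]
    return one_hot + [(record["experience"] - mean) / std]

for r in records:
    print(featurize(r))
```

Keeping fit and transform separate matters in production: the parameters learned on training data must be reused unchanged at serving time to avoid training/serving skew.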

Posted 2 months ago

Apply

1.0 - 5.0 years

4 - 9 Lacs

Chennai

Work from Office

AI in cloud deployment, combining AI/ML expertise with cloud infrastructure and DevOps skills.

Job Title: DevOps Engineer
Location: Chennai
Job Type: Full-Time

Job Summary:
We are looking for a highly skilled AI Cloud Deployment Engineer to join our team to build, deploy, and scale AI/ML models and pipelines in cloud environments. The ideal candidate has strong experience with cloud platforms (Azure, AWS, GCP), containerization, CI/CD, and MLOps best practices. You will work closely with data scientists, ML engineers, and DevOps teams to operationalize AI solutions.

Key Responsibilities:
- Design and implement cloud-based deployment pipelines for AI/ML models using Azure and AWS.
- Collaborate with data scientists to containerize and deploy ML models using Docker and Kubernetes.
- Set up CI/CD pipelines for continuous model integration and deployment (MLOps).
- Monitor, troubleshoot, and optimize deployed AI services for performance and reliability.
- Manage scalable data pipelines and APIs using cloud-native technologies.
- Apply security best practices to model deployment and API access control.
- Automate infrastructure provisioning with tools like Terraform or ARM templates.
- Ensure reproducibility and versioning of ML experiments using tools like MLflow, DVC, or SageMaker.
- Build dashboards and logging/monitoring systems for AI applications.

Required Skills:
- Hands-on experience with AI/ML model deployment in cloud environments (Azure ML, SageMaker, Vertex AI, etc.).
- Proficiency in cloud computing: AWS, Azure, or GCP.
- Experience with Docker, Kubernetes, and container orchestration.
- Strong understanding of MLOps, including model versioning, drift detection, and pipeline automation.
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or Azure DevOps.
- Good programming skills in Python (FastAPI for serving models) and Java.
- Experience with infrastructure-as-code (Terraform, CloudFormation, Azure Bicep).
- Knowledge of cloud security, networking, and identity management.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience with ML tools like MLflow, Kubeflow, SageMaker Pipelines.
- Knowledge of data engineering frameworks (Airflow, Kafka, Spark).
- Prior experience working in Agile/Scrum environments.

Nice to Have:
- Exposure to edge deployment of AI models (e.g., on IoT devices).
- Knowledge of LLMOps and deploying generative AI models in production.
- Familiarity with serverless compute services (AWS Lambda, Azure Functions).
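The experiment-versioning requirement above rests on a simple idea that tools like MLflow and DVC build on: derive a deterministic version ID from the exact parameters and data, so any change to either produces a new, traceable version. A hedged stdlib sketch of that idea (not how any particular tool implements it):

```python
import hashlib
import json

def version_id(params, training_data):
    """Content-address an experiment: hash sorted params plus the data bytes."""
    digest = hashlib.sha256()
    digest.update(json.dumps(params, sort_keys=True).encode())  # order-independent
    digest.update(training_data)
    return digest.hexdigest()[:12]

v1 = version_id({"lr": 0.01, "epochs": 10}, b"rows-2024-01")
v2 = version_id({"epochs": 10, "lr": 0.01}, b"rows-2024-01")  # same config, reordered
v3 = version_id({"lr": 0.01, "epochs": 10}, b"rows-2024-02")  # new data

print(v1 == v2)  # True: key order does not change the version
print(v1 == v3)  # False: new data yields a new version
```

In practice the digest would cover the git commit and environment lockfile too, and the ID would tag the model artifact in the registry.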

Posted 2 months ago

Apply

12.0 - 16.0 years

25 - 40 Lacs

Pune

Work from Office

We are seeking a highly skilled and experienced GCP Cloud Architect to lead the design, development, and optimization of cloud-based solutions on the Google Cloud Platform. This role requires a deep understanding of cloud architecture, GCP services, and best practices for building scalable, secure, and efficient cloud environments. The ideal candidate will have hands-on experience with GCP tools, infrastructure automation, and cloud security.

Must-Have Skills:
- GCP Expertise: in-depth knowledge of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub.
- Infrastructure Automation: proficiency with IaC tools like Terraform or GCP Deployment Manager.
- Cloud Security: strong understanding of IAM, VPC, firewall configurations, and encryption in GCP.
- Programming Skills: proficiency in Python, Go, or other programming languages for automation and scripting.
- Data Engineering Knowledge: familiarity with data pipelines and services like Dataflow, Dataproc, or BigQuery.
- Monitoring and Observability: hands-on experience with GCP monitoring tools and best practices.
- Problem-solving: strong analytical and troubleshooting skills.
- Collaboration: excellent communication and teamwork abilities to work effectively across teams.

Roles and Responsibilities:
- Architecture Design: design and develop robust and scalable cloud architectures leveraging GCP services to meet business requirements.
- Cloud Migration and Optimization: lead cloud migration initiatives and optimize existing GCP environments for cost, performance, and reliability.
- Infrastructure as Code (IaC): implement and maintain infrastructure automation using tools like Terraform or Deployment Manager.
- Integration with Cloud Services: integrate various GCP services (e.g., BigQuery, Pub/Sub, Cloud Functions) into cohesive and efficient solutions.
- Observability and Monitoring: set up monitoring, logging, and alerting using GCP-native tools like Cloud Monitoring and Cloud Logging.
- Security and Compliance: ensure cloud environments adhere to industry security standards and compliance requirements, implementing IAM policies, encryption, and network security measures.
- Collaboration: work with cross-functional teams, including DevOps, data engineers, and application developers, to deliver cloud solutions.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud Architect, Professional Data Engineer).
- Experience with container orchestration using Kubernetes (GKE).
- Knowledge of CI/CD pipelines and DevOps practices.
- Exposure to AI/ML services like Vertex AI.

Posted 2 months ago

Apply

4.0 - 6.0 years

20 - 22 Lacs

Bengaluru

Work from Office

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Immediate joiners only.
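The LLM-evaluation skill named above can be illustrated with a tiny offline harness: score model answers against expected phrases. `fake_llm` here is a hypothetical stub; real evaluation on Vertex AI would call a deployed endpoint instead.

```python
# Hedged sketch of an offline GenAI evaluation harness with a stubbed model.
def fake_llm(prompt):
    """Stand-in for a deployed LLM endpoint (hypothetical canned answers)."""
    canned = {
        "Expand the acronym OTT.": "over-the-top",
        "Expand the acronym DTH.": "direct-to-home",
    }
    return canned.get(prompt, "unknown")

def evaluate(cases):
    """Fraction of prompts whose answer contains the expected phrase."""
    hits = sum(1 for prompt, expected in cases if expected in fake_llm(prompt))
    return hits / len(cases)

cases = [
    ("Expand the acronym OTT.", "over-the-top"),
    ("Expand the acronym DTH.", "direct-to-home"),
    ("Expand the acronym VOD.", "video on demand"),
]
print(evaluate(cases))  # 2 of 3 cases pass with this stub
```

Substring matching is the crudest scoring rule; real evaluations often add an LLM-as-judge or task-specific metrics, but the harness shape stays the same.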

Posted 2 months ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Bengaluru

Work from Office

(Immediate Joiners)

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Posted 2 months ago

Apply

4.0 - 8.0 years

1 - 1 Lacs

Ahmedabad

Work from Office

We're seeking a seasoned AI/ML Developer to join our tech team and help build next-generation GenAI products and scalable AI-powered APIs. You'll architect and deploy intelligent workflows on Google Cloud Platform, integrate cutting-edge LLMs, and design real-time intent-detection systems across multiple user journeys.

Key Responsibilities:
- Model Development & Deployment: build, train, and deploy AI/ML models using GCP's ADK and Vertex AI; implement prompt-engineering strategies for large language models (e.g., Gemini).
- Vector Search & Retrieval: design and maintain vector search pipelines on GCP; integrate with LangChain and other tools for context-aware retrieval.
- API & Backend Engineering: develop secure, scalable RESTful APIs using FastAPI and Python; automate deployments via GitHub CI/CD workflows.
- Intent-Detection Systems: architect multi-flow intent-detection pipelines with high accuracy; continuously monitor and optimize model performance.
- Cloud Infrastructure & Security: enforce best practices for secure, scalable GCP services; collaborate with DevOps to ensure reliable, automated infrastructure.
- Collaboration & Documentation: work closely with cross-functional teams (DevOps, Product, QA); maintain clear technical documentation and share knowledge.

Required Skills & Qualifications:
- AI/ML Expertise: 4+ years of hands-on experience with AI/ML solutions in production; proficiency with GCP services: Vertex AI, ADK, Firestore.
- Vector Search & LLM Integration: experience building vector search pipelines on GCP; strong familiarity with Gemini, LangChain, or similar LLM frameworks.
- Backend Development: advanced Python skills; production experience with FastAPI; solid understanding of REST principles, authentication, and security.
- DevOps & Automation: GitHub Actions (or equivalent) for CI/CD; infrastructure as code (Terraform, Deployment Manager, etc.).
- Intent Detection & NLP: track record of building intent-classification or NLU systems; familiarity with metrics and A/B testing for conversational AI.
- Soft Skills: excellent problem-solving and analytical abilities; strong communication skills and a collaborative mindset.
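The intent-detection work described above often starts from a rule-based baseline before an LLM or trained classifier takes over. A hedged sketch of such a baseline, with hypothetical intents and keyword sets:

```python
# Keyword-scoring intent baseline (illustrative intents, not a product spec).
INTENTS = {
    "billing": {"invoice", "payment", "charge", "refund"},
    "support": {"error", "crash", "broken", "help"},
    "sales": {"price", "quote", "demo", "buy"},
}

def detect_intent(utterance):
    """Pick the intent whose keyword set overlaps the utterance most."""
    tokens = set(utterance.lower().replace("?", "").split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("I was charged twice, can I get a refund?"))           # billing
print(detect_intent("The app keeps showing an error and help is needed"))  # support
print(detect_intent("good morning"))                                       # unknown
```

Beyond serving as a fallback, a baseline like this gives the A/B tests mentioned above something concrete to beat.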

Posted 2 months ago

Apply

4.0 - 7.0 years

20 - 22 Lacs

Bengaluru

Work from Office

Job Description: AI/ML GenAI Expert (4-7 Years Experience)

Role Overview: We are looking for an ML/AI GenAI Expert with 4-7 years of experience to execute AI/GenAI use cases as POCs. Media domain experience (OTT, DTH, Web) is a plus.

Key Responsibilities:
- Identify, define, and deliver AI/ML and GenAI use cases in collaboration with business and technical stakeholders.
- Design, develop, and deploy models (ML and GenAI) using Google Cloud's Vertex AI platform.
- Fine-tune and evaluate LLMs for domain-specific applications, ensuring responsible AI practices.
- Collaborate with data engineers and architects to ensure robust, scalable, and secure data pipelines feeding ML models.
- Document solutions, workflows, and experiments to support reproducibility, transparency, and handover readiness.

Core Skills:
- Strong foundation in machine learning and deep learning, including supervised, unsupervised, and reinforcement learning.
- Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Registry, and Generative AI Studio.
- Experience with LLMs and GenAI workflows, including prompt engineering, tuning, and evaluation.
- Proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers).
- Strong collaboration and communication skills to work cross-functionally with data, product, and business teams.

Technical Skills:
- Vertex AI on Google Cloud: model training, deployment, endpoint management, and MLOps tooling.
- GenAI tools and APIs: hands-on with PaLM, Gemini, or other large language models via Vertex AI or open source.
- Python: proficient in scripting ML pipelines, data preprocessing, and model evaluation.
- ML/GenAI libraries: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain.
- Cloud & DevOps: experience with GCP services (BigQuery, Cloud Functions, Cloud Storage), CI/CD for ML, and containerization (Docker/Kubernetes).
- Experience in the media domain (OTT, DTH, Web) and handling large-scale media datasets.

Posted 2 months ago

Apply

3.0 - 8.0 years

12 - 15 Lacs

Mumbai

Work from Office

Responsibilities:
- Develop and maintain data pipelines using GCP.
- Write and optimize queries in BigQuery.
- Utilize Python for data processing tasks.
- Manage and maintain SQL Server databases.

Must-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Proficiency in BigQuery query writing.
- Strong Python programming skills.
- Expertise in SQL Server.

Good to Have:
- Knowledge of MLOps practices.
- Experience with Vertex AI.
- Background in data science.
- Familiarity with any data visualization tool.
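One concrete habit behind "optimize queries in BigQuery" is filtering on the table's partition column so the engine prunes partitions instead of scanning the full table. A hedged sketch (table and column names are hypothetical):

```python
# Illustrative sketch: build a BigQuery query that restricts the scan to a
# date-partition range, a standard cost-control pattern. The table and
# column names here are invented for demonstration.

def daily_events_query(table: str, start_date: str, end_date: str) -> str:
    """Return SQL that filters on the (assumed) partition column event_date."""
    return (
        "SELECT event_id, user_id, event_type\n"
        f"FROM `{table}`\n"
        f"WHERE event_date BETWEEN '{start_date}' AND '{end_date}'"
    )

sql = daily_events_query("project.dataset.events", "2024-01-01", "2024-01-07")
```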

Posted 2 months ago

Apply

10.0 - 15.0 years

8 - 10 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office

Job Title: Data Scientist and Analytics, Level 7: Manager (Ind & Func AI Decision Science Manager, S&C)
Management Level: 07 - Manager
Location: Bangalore/Gurgaon/Hyderabad/Mumbai
Must-have skills: Technical (Python, SQL, ML and AI); Functional (Data Science and B2B Analytics, preferably in the Telco and S&P industries)
Good-to-have skills: GenAI, Agentic AI, cloud (AWS/Azure/GCP)

Job Summary:
About Global Network Data & AI: Accenture Strategy & Consulting's Global Network Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition.

About Comms & Media practice: The Accenture Center for Data and Insights (CDI) team helps businesses integrate data and AI into their operations to drive innovation and business growth by designing and implementing data strategies, generating actionable insights from data, and enabling clients to make informed decisions. In CDI, we leverage AI (predictive + generative), analytics, and automation to build innovative and practical solutions, tools, and capabilities. The team is also building and socializing a Marketplace to democratize data and AI solutions within Accenture and for clients. Globally, the CDI practice works across industries to develop value growth strategies for its clients and infuse AI & GenAI to help deliver on their top business imperatives, i.e., revenue growth and cost reduction. From multi-year Data & AI transformation projects to shorter, more agile engagements, we have a rapidly expanding portfolio of hyper-growth clients and an increasing footprint with next-gen solutions and industry practices.
Roles & Responsibilities:
Experienced in Analytics in the B2B domain; responsible for helping clients design and deliver AI/ML solutions. He/she should be strong in the Telco and S&P domains and AI fundamentals, with good hands-on experience in the following:
- Working with large data sets and presenting conclusions to key stakeholders.
- Data management using SQL.
- Data manipulation and aggregation using Python.
- Propensity modelling using various ML algorithms.
- Text mining using NLP/AI techniques.
- Proposing solutions to the client, based on gap analysis of existing Telco platforms, that can generate long-term, sustainable value.
- Gathering business requirements from client stakeholders via interactions such as interviews and workshops.
- Tracking down and reading all previous information on the problem or issue in question; exploring obvious and known avenues thoroughly; asking a series of probing questions to get to the root of a problem.
- Understanding the as-is process, identifying issues that can be resolved through Data & AI or process solutions, and designing the detailed to-be state.
- Understanding customer needs and translating them into business requirements (business requirement definition), business process flows, and functional requirements, and informing the best approach to the problem.
- Adopting a clear and systematic approach to complex issues (i.e., A leads to B leads to C); analyzing relationships between several parts of a problem or situation; anticipating obstacles and identifying a critical path for a project.
- Independently delivering products and services that empower clients to implement effective solutions; making specific changes and improvements to processes or own work to achieve more.
- Working with other team members and making deliberate efforts to keep others up to date.
- Establish a consistent and collaborative presence with clients and act as the primary point of contact for assigned clients; escalate, track, and solve client issues.
- Partner with clients to understand their end clients' business goals, marketing objectives, and competitive constraints.
- Storytelling: crunch the data and numbers to craft a story to be presented to senior client stakeholders.

Professional & Technical Skills:
- Overall 10+ years of experience in Data Science.
- B.Tech in Engineering from a Tier 1 school or MSc in Statistics/Data Science from a Tier 1/Tier 2 institution.
- Demonstrated experience in solving real-world data problems through Data & AI.
- Direct onsite experience (i.e., client-facing work inside client offices in India or abroad) is mandatory; please note we are looking for client-facing roles.
- Proficiency in data mining, mathematics, and statistical analysis.
- Advanced pattern recognition and predictive modelling experience; knowledge of advanced analytical fields such as text mining, image recognition, video analytics, IoT, etc.
- Execution-level understanding of econometric/statistical modelling packages.
- Traditional techniques: linear/logistic regression, multivariate statistical analysis, time series techniques, fixed/random effect modelling.
- Machine learning techniques: Random Forest, Gradient Boosting, XGBoost, decision trees, clustering, etc.
- Knowledge of deep learning modelling techniques such as RNNs, CNNs, etc.
- Experience with digital and statistical modelling software: Python (must), R, PySpark, SQL (must), BigQuery, Vertex AI.
- Proficient in Excel, MS Word, PowerPoint, and corporate soft skills.
- Knowledge of dashboard creation platforms: Excel, Tableau, Power BI, etc.
- Excellent written and oral communication skills, with the ability to clearly communicate ideas and results to non-technical stakeholders.
- Strong analytical and problem-solving skills and good communication skills.
- Self-starter with the ability to work independently across multiple projects and set priorities.
- Strong team player; proactive and solution-oriented, able to guide junior team members.
- Execution knowledge of optimization techniques is a good-to-have: exact optimization (linear and non-linear techniques) and evolutionary optimization (both population- and search-based algorithms).
- Cloud platform certification and experience in computer vision are good-to-haves.

Qualification and Experience: B.Tech in Engineering from a Tier 1 school or MSc in Statistics/Data Science from a Tier 1/Tier 2 institution.
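The propensity modelling named above usually means fitting a logistic-style model that scores each customer's likelihood of an outcome. A minimal, self-contained sketch, using stdlib-only gradient descent on synthetic data as a stand-in for the scikit-learn/statsmodels stacks the posting lists:

```python
# Illustrative sketch: logistic regression for propensity scoring, trained
# by gradient descent on synthetic data. The single feature and labels are
# invented; a real model would use many engineered features.
import math
import random

random.seed(0)
# Synthetic B2B-style data: one feature (e.g. recent usage), binary outcome.
X = [random.uniform(-2, 2) for _ in range(200)]
y = [1 if x + random.gauss(0, 0.3) > 0 else 0 for x in X]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(300):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(w * xi + b)))  # predicted propensity
        gw += (p - yi) * xi                    # gradient of log-loss w.r.t. w
        gb += (p - yi)                         # gradient w.r.t. b
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

preds = [1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0 for x in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
```

On this noisy synthetic data the fitted model recovers the underlying decision boundary near zero; in practice one would evaluate on held-out data rather than the training set.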

Posted 2 months ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Consultant - GCP Sr Data Engineer.

We are seeking a highly accomplished and strategic Google Cloud Data Engineer with deep experience in data engineering and a significant, demonstrable focus on the Google Cloud Platform (GCP). In this leadership role, you will be instrumental in defining and driving our overall data strategy on GCP, architecting transformative data solutions, and providing expert guidance to engineering teams. You will be a thought leader in leveraging GCP's advanced data services to solve complex business challenges, optimize our data infrastructure at scale, and foster a culture of data excellence.

Responsibilities:
- Define and champion the strategic direction for our data architecture and infrastructure on Google Cloud Platform, aligning with business objectives and future growth.
- Architect and oversee the development of highly scalable, resilient, and cost-effective data platforms and pipelines on GCP, leveraging services like BigQuery, Dataflow, Cloud Composer, Dataproc, and more.
- Provide expert-level guidance and technical leadership to senior data engineers and development teams on best practices for data modeling, ETL/ELT processes, and data warehousing within GCP.
- Drive the adoption of cutting-edge GCP data technologies and methodologies to enhance our data capabilities and efficiency.
- Lead the design and implementation of comprehensive data governance frameworks, security protocols, and compliance measures within the Google Cloud environment.
- Collaborate closely with executive leadership, product management, data science, and analytics teams to translate business vision into robust and scalable data solutions on GCP.
- Identify and mitigate critical technical risks and challenges related to our data infrastructure and architecture on GCP.
- Establish and enforce data quality standards, monitoring systems, and incident response processes within the GCP data landscape.
- Mentor and develop senior data engineers, fostering their technical expertise and leadership skills within the Google Cloud context.
- Evaluate and recommend new GCP services and third-party tools to optimize our data ecosystem.
- Represent the data engineering team in strategic technical discussions and contribute to the overall technology roadmap.

Qualifications we seek in you!
Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in data engineering roles, with a significant and deep focus on the Google Cloud Platform.
- Expert-level knowledge of GCP's core data engineering services and best practices for building scalable and reliable solutions.
- Proven ability to architect and implement complex data warehousing and data lake solutions on GCP (BigQuery, Cloud Storage).
- Mastery of SQL and extensive experience with programming languages relevant to data engineering on GCP (e.g., Python, Scala, Java).
- Deep understanding of data governance principles, security best practices within GCP (IAM, Security Command Center), and compliance frameworks (e.g., GDPR, HIPAA).
- Exceptional problem-solving, strategic thinking, and analytical skills, with the ability to navigate complex technical and business challenges.
- Outstanding communication, presentation, and influencing skills, with the ability to articulate complex technical visions to both technical and non-technical audiences, including executive leadership.
- Proven track record of leading and mentoring high-performing data engineering teams within a cloud-first environment.

Preferred Qualifications / Skills:
- Google Cloud Certified Professional Data Engineer.
- Extensive experience with infrastructure-as-code tools for GCP (e.g., Terraform, Deployment Manager).
- Deep expertise in data streaming technologies on GCP (e.g., Dataflow, Pub/Sub, Apache Beam).
- Proven experience in integrating machine learning workflows and MLOps on GCP (e.g., Vertex AI).
- Significant contributions to open-source data projects or active participation in the GCP data engineering community.
- Experience in defining and implementing data mesh or data fabric architectures on GCP.
- Strong understanding of enterprise architecture principles and their application within the Google Cloud ecosystem.
- Demonstrated ability to drive significant technical initiatives and influence organizational data strategy.
Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
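A recurring concern for the ETL/ELT and data-quality work this role describes is making batch pipelines idempotent, so reruns do not produce duplicate records. A hedged sketch of the usual dedup-on-business-key pattern (field names are hypothetical):

```python
# Illustrative sketch: deduplicate rows on a business key, keeping the
# latest version of each record so pipeline reruns stay idempotent.
# The id/updated_at/value fields are invented for demonstration.

def dedup_latest(rows):
    """Keep one row per `id`, preferring the highest `updated_at`."""
    latest = {}
    for row in rows:
        key = row["id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    # Stable output order makes downstream comparisons reproducible.
    return sorted(latest.values(), key=lambda r: r["id"])

rows = [
    {"id": 1, "updated_at": "2024-01-01", "value": "a"},
    {"id": 1, "updated_at": "2024-01-03", "value": "b"},
    {"id": 2, "updated_at": "2024-01-02", "value": "c"},
]
deduped = dedup_latest(rows)
```

In BigQuery the same idea is typically expressed with a `ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC)` window, but the logic is identical.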

Posted 2 months ago

Apply

3.0 - 5.0 years

14 - 20 Lacs

Bengaluru

Work from Office

• Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies
• Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have experience with Vertex AI Matching Engine or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real-time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation

GCP Tools Experience:
• ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform
• Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus)
• Storage: BigQuery, Cloud Storage, Firestore
• Processing: Dataproc (PySpark), Dataflow (batch & stream)
• Ingestion: Pub/Sub, Cloud Functions, Cloud Run
• Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run
• CI/CD & IaC: GitHub Actions, GitLab CI
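The offline recommender metrics named above (Recall@K, nDCG) have short standard definitions; here they are in plain Python as an illustrative evaluation sketch on a toy ranked list:

```python
# Illustrative sketch: Recall@K and nDCG@K for offline recommender
# evaluation. The toy ranked list and relevance set are invented.
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of relevant items that appear in the top-K."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked, relevant, k):
    """Discounted cumulative gain normalized by the ideal ordering."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

ranked = ["b", "e", "a", "d", "c"]  # model's top-5 for one user
relevant = {"a", "b"}               # held-out positives for that user

r5 = recall_at_k(ranked, relevant, 5)
n5 = ndcg_at_k(ranked, relevant, 5)
```

Both relevant items are retrieved in the top 5 (Recall@5 = 1.0), but "a" sits at rank 3 rather than rank 2, so nDCG@5 is a little below 1.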

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Lead Software Engineer within the Enterprise Application & Cloud Transformation team.

In this role, you will:
- Lead complex technology cloud initiatives, including those that are companywide with broad impact.
- Act as a key contributor in automating the provisioning of cloud infrastructure using Infrastructure as Code.
- Make decisions in developing standards and companywide best practices for engineering and large-scale technology solutions.
- Design, optimize, and document the engineering aspects of the cloud platform.
- Apply industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives.
- Review and analyze complex, large-scale cloud technology solutions for strategic business objectives, solving technical challenges that require in-depth evaluation of multiple parameters, including intangibles or unprecedented technical factors.
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
- Build and enable cloud infrastructure and automate the orchestration of the entire GCP cloud platform for Wells Fargo Enterprise.
- Work in a globally distributed team to provide innovative and robust cloud-centric solutions.
- Work closely with product teams and vendors to develop and deploy cloud services that meet customer expectations.

Required Qualifications:
- 5+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
- 3+ years working with GCP and a proven track record of building complex infrastructure programmatically with IaC tools.
- Must have 2+ years of hands-on experience with the Infrastructure as Code tool Terraform and with GitHub.
- Must have a professional cloud certification on GCP.
- Proficient in container-based solution services; has handled at least 2-3 large-scale Kubernetes-based infrastructure build-outs and provisioning of services on GCP; exposure to services like GKE, Cloud Functions, Cloud Run, Cloud Build, Artifactory, etc.
- Infrastructure and automation technologies: orchestration, Harness, Terraform, service mesh, Kubernetes, API development, test-driven development.
- Sound knowledge of the following areas, with expertise in one of them:
  1. Proficient, thorough understanding of cloud service offerings for storage and databases.
  2. Good understanding of networking, firewalls, and load-balancing concepts (IP, DNS, guardrails, VNets), and exposure to cloud security, AD, authentication methods, and RBAC.
  3. Proficient, thorough understanding of cloud service offerings for data, analytics, and AI/ML; exposure to analytics and AI/ML services like BigQuery, Vertex AI, Dataproc, etc.
  4. Proficient, thorough understanding of cloud service offerings for security, data protection, and security policy implementation.
- Thorough understanding of landing zones and networking, security best practices, monitoring and logging, and risk and controls.
- Good understanding of the control plane, Azure Arc, and Google Anthos.
- Experience working in an Agile environment and grooming the product backlog against ongoing engineering work.
- Enterprise change management and change control; experience working within a procedural and process-driven environment.

Desired Qualifications:
- Exposure to cloud governance and logging/monitoring tools.
- Experience with Agile, CI/CD, DevOps concepts, and SRE principles.
- Experience in scripting (Shell, Python, Go).
- Excellent verbal, written, and interpersonal communication skills; ability to articulate technical solutions to both technical and business audiences.
- Ability to deliver and engage with partners effectively in a multi-cultural environment, demonstrating co-ownership and accountability in a matrix structure.
Delivery focus and willingness to work in a fast-paced, enterprise environment.

Posted 2 months ago

Apply

2.0 - 4.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Job Description: AI Engineer (Junior / Associate Level)

About the Role:
We are looking for a passionate, hands-on AI Developer with around 3 years of experience in building and deploying machine learning models and working with the latest AI tools and frameworks. You will work closely with our data science and engineering teams to develop smart, scalable, production-ready AI solutions.

Key Responsibilities:
- Design, develop, test, and deploy machine learning models for classification, regression, recommendation, or NLP tasks.
- Work with state-of-the-art AI tools and libraries such as LangChain, Hugging Face, OpenAI, LlamaIndex, etc.
- Integrate Large Language Models (LLMs) into applications via APIs or custom fine-tuning.
- Build data pipelines for training and inference, ensuring model performance and robustness.
- Collaborate with software developers, product managers, and data engineers to turn AI models into usable products.
- Stay updated with the latest research and innovations in the AI/ML space and bring new ideas into development.
- Optimize model performance and scale AI solutions for production.

Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 2 to 4 years of hands-on experience in machine learning and deep learning.
- Proficiency in Python and libraries like TensorFlow, PyTorch, scikit-learn, Pandas, NumPy.
- Good understanding of LLMs and experience working with platforms like OpenAI, Claude, Cohere, or Hugging Face Transformers.
- Experience in prompt engineering, fine-tuning, or using frameworks like LangChain, LlamaIndex, or Haystack.
- Exposure to cloud platforms (AWS, GCP, Azure) and tools like Docker, Git, and CI/CD workflows.
- Strong analytical and problem-solving skills.
- Understanding of data preprocessing, feature engineering, and model evaluation techniques.

Good to Have:
- Experience building AI chatbots or virtual assistants.
- Knowledge of vector databases (e.g., Chroma, Pinecone, Weaviate).
- Familiarity with MLOps tools like MLflow, Kubeflow, Vertex AI, etc.
- Experience with RESTful APIs or microservices architecture.
- Participation in Kaggle competitions or open-source AI projects.

What We Offer:
- Opportunity to work on real-world AI applications and products.
- A collaborative, learning-first environment.
- Exposure to enterprise-grade AI deployments and tools.
- Flexible working hours and remote work options.
- Competitive compensation and a career growth path.
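The chatbot and vector-database items above both rest on one core pattern: embed documents and queries as vectors, then retrieve by cosine similarity. A toy sketch with hand-made 3-dimensional vectors (a real system would use model embeddings and a vector database such as Chroma or Pinecone):

```python
# Illustrative sketch: nearest-document retrieval via cosine similarity.
# The document names and tiny vectors are invented stand-ins for real
# embeddings produced by a model.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "account security": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # stand-in for an embedded user question

best = max(docs, key=lambda name: cosine(query, docs[name]))
```

The retrieved document would then be stuffed into the LLM prompt as context, which is the essence of a retrieval-augmented chatbot.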

Posted 2 months ago

Apply

9.0 - 12.0 years

16 - 25 Lacs

Hyderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience, such as with GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.
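"Model wrapper" is not defined in the posting; one common reading is a class that bundles a trained model with its preprocessing behind a stable `predict()` interface. A hedged sketch under that assumption, with a toy stand-in model:

```python
# Illustrative sketch (assumed interpretation): wrap a trained model with
# its preprocessing so callers never touch raw features directly.
# ThresholdModel and the scale parameter are invented for demonstration.

class ModelWrapper:
    def __init__(self, model, scale):
        self.model = model  # any object exposing predict(list_of_rows)
        self.scale = scale  # preprocessing parameter (hypothetical)

    def predict(self, raw_rows):
        # Apply the same scaling used at training time, then delegate.
        features = [[v / self.scale for v in row] for row in raw_rows]
        return self.model.predict(features)

class ThresholdModel:
    """Toy stand-in for a trained classifier."""
    def predict(self, rows):
        return [1 if sum(r) > 1.0 else 0 for r in rows]

wrapped = ModelWrapper(ThresholdModel(), scale=10.0)
preds = wrapped.predict([[12, 3], [1, 2]])
```

Keeping preprocessing inside the wrapper prevents training/serving skew, since the serving path cannot forget the scaling step.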

Posted 3 months ago

Apply


3.0 - 6.0 years

15 - 30 Lacs

Hyderabad

Remote

Role & responsibilities: We are seeking a skilled Data Engineer to join our team and enhance the efficiency and accuracy of health claim fraud detection. This role involves designing, building, and optimizing data pipelines, integrating AI/ML models in Vertex AI, and improving data processing workflows to detect fraudulent claims faster and more accurately.

Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
- 3+ years of experience in data engineering, preferably in the healthcare or financial sector.
- Strong experience with Google Cloud (GCP) services: Vertex AI, BigQuery, Dataflow, Pub/Sub, Dataproc.
- Expertise in SQL and Python for data processing and transformation.
- Experience with ML model deployment and monitoring in Vertex AI.
- Knowledge of ETL pipelines, data governance, and security best practices.
- Familiarity with healthcare data standards (HL7, FHIR) and compliance frameworks.
- Experience with Apache Beam, Spark, or Kafka is a plus.

Preferred Qualifications:
- Experience in fraud detection models using AI/ML.
- Hands-on experience with MLOps in GCP.
- Strong problem-solving skills and the ability to work in a cross-functional team.
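Fraud-detection pipelines of the kind described above often run cheap rule-based pre-filters before any ML model scores a claim. A hedged sketch of that pattern; the thresholds and field names are invented for illustration, not taken from the posting:

```python
# Illustrative sketch: rule-based pre-filter for a claims pipeline.
# amount_limit, max_daily, and the claim fields are hypothetical.

def flag_claims(claims, amount_limit=50_000, max_daily=3):
    """Flag claims exceeding an amount limit or a per-member daily count."""
    daily_counts = {}
    flagged = []
    for c in claims:
        key = (c["member_id"], c["date"])
        daily_counts[key] = daily_counts.get(key, 0) + 1
        if c["amount"] > amount_limit or daily_counts[key] > max_daily:
            flagged.append(c["claim_id"])
    return flagged

claims = [
    {"claim_id": "c1", "member_id": "m1", "date": "2024-05-01", "amount": 80_000},
    {"claim_id": "c2", "member_id": "m2", "date": "2024-05-01", "amount": 1_200},
]
flags = flag_claims(claims)
```

In production, flagged claims would be routed to the Vertex AI model (or a human reviewer) for a second, probabilistic assessment.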

Posted 3 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Mumbai

Work from Office

Job Summary: The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role also supports debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. The position works with multiple stakeholders across different levels of the organization to understand the business problem, then develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities:
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases to technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data from different data sources, utilizing platforms, methods, and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data and engineering relevant features to be used for modelling.
- Actively drive the modelling of business problems into ML/AI models; work closely with stakeholders on model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications:
- Expertise in Python and SQL.
- Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive statistics, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc., for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities in the relevant domain; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications:
- NLP, GenAI, and LLM knowledge/experience.
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment and understanding of Lean Agile principles.
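The hypothesis-testing skill listed above can be illustrated with a pooled two-sample t-statistic, here in stdlib-only Python (the delivery-time numbers are invented; in practice one would use scipy or statsmodels, which the posting names):

```python
# Illustrative sketch: pooled two-sample t-statistic comparing samples from
# two hypothetical process variants. Data values are invented.
import math
import statistics

a = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]   # variant A delivery times (hrs)
b = [10.9, 11.2, 10.7, 11.0, 11.1, 10.8]  # variant B delivery times (hrs)

def t_statistic(x, y):
    """Two-sample t-statistic with pooled variance (equal-variance form)."""
    nx, ny = len(x), len(y)
    sp2 = (((nx - 1) * statistics.variance(x) +
            (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    return ((statistics.mean(x) - statistics.mean(y)) /
            math.sqrt(sp2 * (1 / nx + 1 / ny)))

t = t_statistic(a, b)
```

A t-statistic this far from zero (compared against a t-distribution with nx + ny - 2 degrees of freedom) would indicate a clear difference between the two variants.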

Posted 3 months ago

Apply

4.0 - 9.0 years

7 - 12 Lacs

Mumbai

Work from Office

Site Reliability Engineers (SREs) - robust background in Google Cloud Platform (GCP) and RedHat OpenShift administration.

Responsibilities:
- System reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and incident response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications:
- Education: Bachelor's degree in computer science, engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.
- Skills:
  - Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
  - Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
  - Experience with automation tools (Terraform, Ansible, Puppet).
  - Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
  - Strong scripting skills (Python, Bash, etc.).
  - Knowledge of networking concepts and protocols.
  - Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications:
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification
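A routine calculation behind the "system reliability" responsibility above is tracking the error budget implied by an availability SLO. A hedged sketch with hypothetical numbers:

```python
# Illustrative sketch: remaining error budget under an availability SLO.
# The 99.9% target and downtime figure are hypothetical examples.

def error_budget_left(slo: float, total_minutes: int,
                      downtime_minutes: float) -> float:
    """Return remaining error budget in minutes (negative = SLO breached)."""
    budget = (1 - slo) * total_minutes  # allowed downtime for the window
    return budget - downtime_minutes

# 99.9% SLO over a 30-day month (43,200 minutes) with 12 minutes of downtime.
remaining = error_budget_left(0.999, 43_200, 12.0)
```

A 99.9% monthly SLO allows about 43.2 minutes of downtime; SRE teams typically gate risky releases on how much of that budget is left.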

Posted 3 months ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad, Secunderabad

Work from Office

Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience, such as with GPT. Hands-on experience with data wrangling, feature engineering, and model optimization; also experienced in developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.

Posted 3 months ago

Apply