
207 Vertex AI Jobs - Page 5

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for building and maintaining robust machine learning pipelines in a cloud-based environment, ensuring efficient model deployment, monitoring, and lifecycle management. Expertise in MLOps, specifically with Google Cloud Platform (GCP) and Vertex AI, is essential, along with a deep understanding of model performance drift detection and GPU accelerators. Your main tasks will include:
- Building and maintaining scalable MLOps pipelines in GCP Vertex AI for end-to-end machine learning workflows, managing the full MLOps lifecycle from data preprocessing to model monitoring and drift detection
- Performing real-time model monitoring and drift detection, which is crucial to ensuring optimal model performance over time
- Building and executing CI/CD, containerization, and orchestration workflows, with hands-on experience in Jenkins, GitHub pipelines, Docker, Kubernetes, and OpenShift
- Optimizing model training and inference processes using GPU accelerators and CUDA
- Collaborating with cross-functional teams to automate and streamline machine learning model deployment and monitoring
- Using Python 3.10 with libraries such as pandas, NumPy, and TensorFlow for data processing and model development
- Setting up infrastructure for continuous training, testing, and deployment of machine learning models, ensuring scalability, security, and high availability by implementing MLOps best practices
Preferred Candidate's Profile:
- Experience: 8.5-12 years (Lead Role: 12+ years)
- Experience in MLOps, building ML pipelines, and GCP Vertex AI
- Deep understanding of the MLOps lifecycle and automation of ML workflows
- Proficiency in Python 3.10 and related libraries such as pandas, NumPy, and TensorFlow
- Strong experience with GPU accelerators and CUDA for model training and optimization
- Proven experience in model monitoring, drift detection, and maintaining model accuracy over time
- Familiarity with CI/CD pipelines, Docker, Kubernetes, and cloud infrastructure
- Strong problem-solving skills with the ability to work in a fast-paced environment
- Experience with tools like Evidently AI for model monitoring and drift detection
- Knowledge of data versioning and model version control techniques
- Familiarity with TensorFlow Extended (TFX) or other ML workflow orchestration frameworks
- Excellent communication and collaboration skills with the ability to work cross-functionally across teams

(Note: The above job description is a summary of the responsibilities and requirements for this position. It is not exhaustive and may be subject to change based on the needs of the organization.)
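The drift detection this role emphasizes (and that tools like Evidently AI automate) boils down to comparing a reference feature distribution against live data. Below is a minimal stdlib-only sketch using the Population Stability Index on synthetic data; a real pipeline would use a monitoring library rather than hand-rolled scoring, and the data here is invented for illustration:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.
    Scores above roughly 0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        # Fraction of the sample falling into bin b, floored to avoid log(0).
        n = sum(1 for x in sample if lo + b * width <= x < lo + (b + 1) * width)
        return max(n / len(sample), 1e-6)

    return sum(
        (frac(actual, b) - frac(expected, b)) * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time data
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]     # same distribution
shifted = [random.gauss(1.5, 1.0) for _ in range(5000)]    # mean has drifted

print(f"stable:  {psi(reference, stable):.3f}")   # low score, no alert
print(f"shifted: {psi(reference, shifted):.3f}")  # high score, raise alert
```

A production monitor would compute this per feature on a schedule and trigger an alert or retraining when the score crosses a threshold.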

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You are being hired for the position of SAS CitDev Associate Engineer at Bangalore, India. As an Analyst, your primary responsibility will be to design, develop, and maintain AI-powered chatbots and conversational systems using Dialogflow CX, Vertex AI, and Terraform. You should have strong programming skills in Python and expertise in deploying scalable AI models and infrastructure through Terraform and Google Cloud Platform (GCP). Collaboration with cross-functional teams is crucial to deliver intelligent and automated customer service solutions.
Benefits under the flexible scheme include a best-in-class leave policy, gender-neutral parental leave, childcare assistance benefit reimbursement, sponsorship for industry-relevant certifications, an Employee Assistance Program, comprehensive hospitalization insurance, accident and term life insurance, and complimentary health screening for individuals aged 35 and above.
Your key responsibilities will include:
- Designing and implementing AI-driven chatbots using Dialogflow CX and Vertex AI
- Developing and deploying conversational flows, intents, entities, and integrations
- Using Terraform to manage cloud infrastructure and GCP services to deploy AI models
- Maintaining Python scripts for data processing
- Collaborating with data scientists to integrate ML models
- Ensuring data security, scalability, and reliability of AI systems
- Monitoring chatbot performance and creating technical documentation for AI systems and infrastructure
To excel in this role, you should have proven experience with GCP services, strong programming skills in Python, experience with Terraform, hands-on experience with Dialogflow CX and Vertex AI, familiarity with deploying scalable AI models and infrastructure, excellent problem-solving skills, attention to detail, and the ability to collaborate effectively with cross-functional teams.
You will receive training and development opportunities, coaching and support from experts on your team, and a culture of continuous learning to aid progression. The company, Deutsche Bank Group, promotes a positive, fair, and inclusive work environment and welcomes applications from all individuals. Visit the company website for further information: https://www.db.com/company/company.htm
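Dialogflow CX agents are built from intents (what the user wants) and entities (parameters extracted from the utterance). As a conceptual illustration only (not the Dialogflow API; the intent and entity names here are invented), the matching idea looks like:

```python
# Toy intent/entity matcher illustrating Dialogflow CX concepts.
# The real service does ML-based matching; this is keyword lookup only.
INTENTS = {
    "check_balance": ["balance", "how much money"],
    "card_lost": ["lost my card", "stolen card"],
}

ENTITIES = {
    "account_type": ["savings", "current"],
}

def match(utterance):
    """Return (intent_name, extracted_entities) for an utterance."""
    text = utterance.lower()
    intent = next(
        (name for name, phrases in INTENTS.items()
         if any(p in text for p in phrases)),
        "fallback",  # no intent matched
    )
    entities = {
        ent: value
        for ent, values in ENTITIES.items()
        for value in values
        if value in text
    }
    return intent, entities

print(match("What is the balance on my savings account?"))
# ('check_balance', {'account_type': 'savings'})
```

In Dialogflow CX these pieces live in conversational flows and pages; the point of the sketch is only the intent/entity decomposition the responsibilities above refer to.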

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Do you want to help solve the world's most pressing challenges? Feeding the world's growing population and slowing climate change are two of the world's greatest challenges. AGCO is a part of the solution! Join us to make your contribution. As a Senior MLOps Engineer within the AI Platform Engineering team, you will help design, build, and scale the foundational capabilities that power AI across our enterprise. You'll work at the intersection of machine learning, software engineering, and DevOps, enabling AI delivery teams to reliably develop, deploy, and monitor models in production.
Your Impact
- Design and implement robust, scalable MLOps pipelines to automate the end-to-end machine learning lifecycle, from training to deployment to monitoring.
- Collaborate with data scientists, ML engineers, and platform engineers to operationalize AI models across cloud and hybrid environments, ensuring performance, traceability, and reliability.
- Build and maintain containerized training and inference environments, enabling reproducible ML workflows across projects and teams.
- Develop and optimize model CI/CD workflows using tools like GitHub Actions or Jenkins.
- Establish and monitor model observability metrics (latency, drift, data quality, and inference errors) across production pipelines.
- Support the integration of ML models into a variety of deployment targets, including internal applications, APIs, business dashboards, and hardware platforms onboard agricultural machinery.
- Collaborate closely with enterprise architecture, BI engineering, and product engineering teams to ensure seamless deployment across cloud, edge, and embedded systems.
- Serve as a technical mentor and contributor, helping define best practices around model deployment, monitoring, versioning, and rollback strategies.
- Drive adoption of the AI platform's tools and services among delivery teams, incorporating their feedback into roadmap improvements.
- Continuously explore and evaluate emerging tools in the MLOps ecosystem, recommending innovative ways to increase velocity, visibility, and scalability.
Your Experience And Qualifications
- 8+ years of experience in software, data, or ML engineering, including 5+ years in MLOps or operationalizing AI/ML workflows.
- Deep hands-on experience with Python, Spark/PySpark, SQL, and orchestration tools such as Airflow.
- Proven experience with public cloud platforms, preferably GCP (Vertex AI, GKE, Cloud Run) or AWS.
- Skilled in Databricks, FastAPI, and containerized environments using Docker and Kubernetes.
- Deep understanding of CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code (Terraform).
- Expertise with MLOps tools like MLflow, Kubeflow, or similar for model tracking, versioning, deployment, and monitoring.
- Proficient with model monitoring frameworks (e.g., Prometheus, Grafana) and implementing model drift detection, performance-degradation alerts, and rollback mechanisms.
- Demonstrated ability to contribute to roadmap execution and deliver outcomes.
- Self-motivated, proactive, and comfortable working both independently and collaboratively across engineering, data science, and business teams.
- Strong communication and stakeholder engagement skills, with a focus on impact and execution.
Your Benefits
- GLOBAL DIVERSITY: Diversity means many things to us: different brands, cultures, nationalities, genders, generations, even variety in our roles. You make us unique!
- ENTERPRISING SPIRIT: Every role adds value. We're committed to helping you develop and grow to realize your potential.
- POSITIVE IMPACT: Make it personal and help us feed the world.
- INNOVATIVE TECHNOLOGIES: You can combine your love for technology with manufacturing excellence and work alongside teams of people worldwide who share your enthusiasm.
- MAKE THE MOST OF YOU: Benefits include health care and wellness plans and flexible and virtual work options.
Your Workplace
We value inclusion and recognize the innovation a diverse workforce delivers to our farmers. Through our recruitment efforts, we are committed to building a team that includes a variety of experiences, backgrounds, cultures, and perspectives. Join us as we bring agriculture into the future and apply now! Please note that this job posting is not designed to cover or contain a comprehensive listing of all required activities, duties, responsibilities, or benefits and may change at any time with or without notice. AGCO is proud to be an Equal Opportunity Employer.
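The versioning and rollback strategies this role mentions follow a simple registry pattern: every model version is kept, one is marked as production, and rollback just repoints that marker. A toy in-memory sketch (MLflow's model registry provides this for real; the class, model, and field names here are invented):

```python
class ModelRegistry:
    """Toy registry: append-only version history plus a production pointer."""

    def __init__(self):
        self._versions = []
        self._production = None  # index into _versions, or None

    def register(self, model, metrics):
        self._versions.append({"model": model, "metrics": metrics})
        return len(self._versions)  # 1-based version number

    def promote(self, version):
        self._production = version - 1

    def rollback(self):
        # Repoint production at the previous version, if one exists.
        if self._production:
            self._production -= 1

    @property
    def production(self):
        return None if self._production is None else self._versions[self._production]

registry = ModelRegistry()
v1 = registry.register("churn-model", {"auc": 0.91})
v2 = registry.register("churn-model-retrained", {"auc": 0.94})
registry.promote(v2)

registry.rollback()  # e.g. drift or inference errors observed in v2
print(registry.production["model"])  # churn-model
```

Because history is append-only, rollback is instant and auditable; the serving layer only ever asks the registry which version is production.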

Posted 1 month ago

Apply

10.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Do you want to help solve the world's most pressing challenges? Feeding the world's growing population and slowing climate change are two of the world's greatest challenges. AGCO is a part of the solution! Join us to make your contribution. As an AI Platform Architect, you will define and evolve the architecture of AGCO's AI platform, designing the technical foundation that empowers teams to deliver AI solutions efficiently, securely, and with confidence. Your work will shape how ML models move from experimentation to production, how AI platform services are consumed across teams, and how platform capabilities scale to support advanced use cases in cloud and edge deployments, including onboard our machines in the field.
Your Impact
- Define the reference architecture for AGCO's AI platform, covering AI/ML data pipeline platforms, model training infrastructure, CI/CD for ML, artifact management, observability, and self-service developer tools.
- Ensure platform services are scalable, auditable, and cost-efficient across heterogeneous workloads (e.g., computer vision, GenAI, machine learning).
- Design core platform services such as containerized training environments, experiment tracking, model registries, and reusable orchestration patterns.
- Architect integration interfaces (API/CLI/UI) that allow AI delivery teams to self-serve platform capabilities reliably and securely.
- Collaborate with Enterprise Architecture, AI PODs, and Product Engineering teams to ensure interoperability across systems.
- Support model deployment across cloud, internal APIs, dashboards, and embedded systems in agricultural machinery.
- Establish technical guardrails for reusability, performance, and lifecycle management of models and agents.
- Serve as a technical leader and advisor across teams, contributing to strategy, roadmap, and engineering excellence.
Your Experience And Qualifications
- 10+ years of experience in software, ML infrastructure, or platform engineering, including 3+ years in AI platform architecture.
- Proven success designing and deploying enterprise-grade ML infrastructure and AI platforms.
- Deep expertise in cloud-native technologies and principles (GCP), e.g., Vertex AI, Cloud Run, GKE, Pub/Sub, and Artifact Registry, as well as automation, elasticity, and resilience by default.
- Experience with CI/CD for ML using tools like GitHub Actions, Kubeflow, and Terraform.
- Strong knowledge of containerization, reproducibility, and secure environment management (e.g., Kubernetes, AWS ECS, Azure Service Fabric, and Docker).
- Deep understanding of model lifecycle management, including training, versioning, deployment, and monitoring.
- Familiarity with data and ML orchestration tools (e.g., Airflow), feature stores, and dataset management systems.
- Excellent systems thinking and architectural design skills, with the ability to design for modularity, scalability, and maintainability.
- Proven ability to work cross-functionally and influence technical direction across engineering and business units.
Your Benefits
- GLOBAL DIVERSITY: Diversity means many things to us: different brands, cultures, nationalities, genders, generations, even variety in our roles. You make us unique!
- ENTERPRISING SPIRIT: Every role adds value. We're committed to helping you develop and grow to realize your potential.
- POSITIVE IMPACT: Make it personal and help us feed the world.
- INNOVATIVE TECHNOLOGIES: You can combine your love for technology with manufacturing excellence and work alongside teams of people worldwide who share your enthusiasm.
- MAKE THE MOST OF YOU: Benefits include health care and wellness plans and flexible and virtual work options.
Your Workplace
We value inclusion and recognize the innovation a diverse workforce delivers to our farmers. Through our recruitment efforts, we are committed to building a team that includes a variety of experiences, backgrounds, cultures, and perspectives. Join us as we bring agriculture into the future and apply now! Please note that this job posting is not designed to cover or contain a comprehensive listing of all required activities, duties, responsibilities, or benefits and may change at any time with or without notice. AGCO is proud to be an Equal Opportunity Employer.
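The "reusable orchestration patterns" this role calls out usually mean expressing a workflow as a dependency graph and letting a runner execute tasks in topological order, which is what Airflow and Vertex AI Pipelines do at scale. A stdlib sketch with invented task names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each task lists its upstream dependencies (illustrative names).
PIPELINE = {
    "ingest": [],
    "validate": ["ingest"],
    "train": ["validate"],
    "evaluate": ["train"],
    "deploy": ["evaluate"],
}

def run(tasks):
    """Execute tasks in dependency order; 'execution' here is just a print."""
    order = list(TopologicalSorter(tasks).static_order())
    for name in order:
        print(f"running {name}")
    return order

executed = run(PIPELINE)
```

Keeping the graph as data, separate from task implementations, is what makes the pattern reusable: teams swap in their own tasks while the platform owns scheduling, retries, and observability.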

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description
We are looking for an experienced GCP Cloud/DevOps Engineer, ideally with OpenShift experience, to design, implement, and manage cloud infrastructure and services across multiple environments. This role requires deep expertise in Google Cloud Platform (GCP) services, DevOps practices, and Infrastructure as Code (IaC). The candidate will deploy, automate, and maintain high-availability systems and implement best practices for cloud architecture, security, and DevOps pipelines.
Requirements
- Bachelor's or master's degree in Computer Science, Information Technology, or a similar field
- 7+ years of extensive experience in designing, implementing, and maintaining applications on GCP and OpenShift
- Comprehensive expertise in GCP services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring
- Solid understanding of cloud security best practices and experience implementing security controls in GCP
- Thorough understanding of cloud architecture principles and best practices
- Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles
- Proven leadership skills and the ability to mentor and guide a technical team
Key Responsibilities
Cloud Infrastructure Design and Deployment: Architect, design, and implement scalable, reliable, and secure solutions on GCP. Deploy and manage GCP services in both development and production environments, ensuring seamless integration with existing infrastructure. Implement and manage core services such as BigQuery, Data Fusion, Cloud Composer (Airflow), Cloud Storage, Compute Engine, App Engine, Cloud Functions, and more.
Infrastructure as Code (IaC) and Automation: Develop and maintain infrastructure as code using Terraform or CLI scripts to automate provisioning and configuration of GCP resources. Establish and document best practices for IaC to ensure consistent and efficient deployments across environments.
DevOps and CI/CD Pipeline Development: Create and manage DevOps pipelines for automated build, test, and release management, integrating with tools such as Jenkins, GitLab CI/CD, or equivalent. Work with development and operations teams to optimize deployment workflows, manage application dependencies, and improve delivery speed.
Security and IAM Management: Handle user and service account management in Google Cloud IAM. Set up and manage Secret Manager and Cloud Key Management for secure storage of credentials and sensitive information. Implement network and data security best practices to ensure compliance and security of cloud resources.
Performance Monitoring and Optimization: Set up observability tools like Prometheus and Grafana, and integrate security tools (e.g., SonarQube, Trivy). Configure DNS, networking, and persistent storage solutions in Kubernetes. Set up monitoring and logging (e.g., Cloud Monitoring, Cloud Logging, Error Reporting) to ensure systems perform optimally. Troubleshoot and resolve issues related to cloud services and infrastructure as they arise.
Workflow Orchestration: Orchestrate complex workflows using the Argo Workflow Engine. Work extensively with Docker for containerization and image management. Troubleshoot and optimize containerized applications for performance and security.
Technical Skills
- Expertise with GCP and OCP (OpenShift) services, including but not limited to Compute Engine, Kubernetes Engine (GKE), BigQuery, Cloud Storage, Pub/Sub, Data Fusion, Airflow, Cloud Functions, and Cloud SQL
- Proficiency in scripting languages like Python, Bash, or PowerShell for automation
- Familiarity with DevOps tools and CI/CD processes (e.g., GitLab CI, Cloud Build, Azure DevOps, Jenkins)
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
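Much of the day-to-day troubleshooting this role describes deals with transient failures from cloud APIs, where retry with exponential backoff is the standard remedy. A minimal sketch (GCP client libraries and tools like tenacity offer this out of the box; the flaky call below is simulated):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 10ms, 20ms, 40ms...

# Simulated cloud call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
print(result)  # ok, after two transient failures
```

In production you would retry only on error classes known to be transient (timeouts, 429s, 503s) and add jitter so many clients do not retry in lockstep.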

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
We are looking for a Data Solution Architect to join the FC India IT Architecture team. In this role, you will define analytics solutions and guide engineering teams in implementing big data solutions on the cloud. The work involves migrating data from legacy on-prem warehouses to the Google Cloud data platform. This role provides architecture assistance to data engineering teams in India, with the key responsibility of supporting applications globally, and will also drive business adoption of the new platform and the sunset of legacy platforms.
Responsibilities
- Utilize Google Cloud Platform and its data services to modernize legacy applications.
- Understand technical business requirements and define architecture solutions that align with Ford Motor & Credit Companies' patterns and standards.
- Collaborate with global architecture teams to define the analytics cloud platform strategy and build cloud analytics solutions within the enterprise data factory.
- Provide architecture leadership in the design and delivery of the new unified data platform on GCP.
- Understand complex data structures in the analytics space as well as interfacing application systems.
- Develop and maintain conceptual, logical, and physical data models.
- Design and guide product teams on subject areas and data marts to deliver integrated data solutions.
- Provide architectural guidance for optimal solutions considering regional regulatory needs.
- Provide architecture assessments of technical solutions and make recommendations that meet business needs and align with architectural governance and standards.
- Guide teams through the enterprise architecture processes and advise them on cloud-based design, development, and data mesh architecture.
- Provide advisory and technical consulting across all initiatives, including PoCs, product evaluations and recommendations, security, architecture assessments, and integration considerations.
- Leverage cloud AI/ML platforms to deliver business and technical requirements.
Qualifications
- Google Professional Solution Architect certification.
- 8+ years of relevant work experience in analytics application and data architecture, with a deep understanding of cloud hosting concepts and implementations.
- 5+ years of experience in data and solution architecture in the analytics space.
- Solid knowledge of cloud data architecture and data modeling principles, and expertise in data modeling tools.
- Experience migrating legacy analytics applications to cloud platforms and driving business adoption of those platforms to build insights and dashboards, backed by deep knowledge of traditional and cloud data lake, warehouse, and mart concepts.
- Good understanding of domain-driven design and data mesh principles.
- Experience designing, building, and deploying ML models to solve business challenges using Python/BQML/Vertex AI on GCP.
- Knowledge of enterprise frameworks and technologies.
- Strong grounding in architecture design patterns, with experience in secure interoperability standards and methods, architecture tools, and processes.
- Deep understanding of traditional and cloud data warehouse environments, with hands-on programming experience building data pipelines on the cloud in a highly distributed and fault-tolerant manner.
- Experience using Dataflow, Pub/Sub, Kafka, Cloud Run, Cloud Functions, BigQuery, Dataform, Dataplex, etc.
- Strong understanding of DevOps principles and practices, including continuous integration and deployment (CI/CD) and automated testing and deployment pipelines.
- Good understanding of cloud security best practices and familiarity with security tools and techniques such as Identity and Access Management (IAM), encryption, and network security.
- Strong understanding of microservices architecture.
Nice to Have
- Bachelor's degree in computer science/engineering, data science, or a related field.
- Strong leadership, communication, interpersonal, organizing, and problem-solving skills.
- Good presentation skills, with the ability to communicate architectural proposals to diverse audiences (user groups, stakeholders, and senior management).
- Experience in the banking and financial regulatory reporting space.
- Ability to work on multiple projects in a fast-paced and dynamic environment.
- Exposure to multiple, diverse technologies, platforms, and processing environments.
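Migrating a legacy warehouse of the kind described above hinges on idempotent loads: replaying a batch must not duplicate or corrupt rows, which is what a keyed MERGE (upsert) gives you in BigQuery. A pure-Python illustration of the semantics, with invented column names:

```python
def merge(target, incoming, key="id"):
    """Upsert incoming rows into target by key, like a warehouse MERGE."""
    by_key = {row[key]: row for row in target}
    for row in incoming:
        # New keys insert; existing keys update in place.
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return sorted(by_key.values(), key=lambda r: r[key])

warehouse = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
batch = [{"id": 2, "amount": 300}, {"id": 3, "amount": 50}]

merged = merge(warehouse, batch)
print(merged)

# Re-running the same batch changes nothing: the load is idempotent,
# so a failed-and-retried pipeline run cannot double-count rows.
print(merge(merged, batch) == merged)  # True
```

In BigQuery this is a `MERGE ... WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT` statement; the sketch only shows why keyed upserts make retries safe.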

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

To deliver faster, smarter, and better business outcomes for clients & partners, powered by AI, we are seeking an entrepreneurial technology leader who can challenge the status quo and inspire innovation with a consulting mindset and business strategy skills. This role involves overseeing the delivery of Applied AI Engineering and embodying the DNA of Client Success & Delivery. We are looking for individuals with a relentless passion for iterative improvement, the ability to architect solutions, research, code, and rebuild to generate tangible outcomes that matter to businesses. We are not seeking mere delegators, budget owners, agency managers, or coordinators. We want hard-core geeks turned engineering managers turned management consultants who are hands-on techies interested in creating impactful solutions beyond traditional approaches. This role demands individuals who are not afraid to take bold bets, drive meaningful change, and create a leveraged impact by solving problems differently. As a strategic leader in Applied AI engineering, you will be responsible for developing and executing a roadmap with a product engineering mindset, leading a high-performance team of developers, architects, and engineering managers. Your role includes hiring, training, coaching, mentoring, and guiding the team towards success. Collaboration with architects, product management, and engineering teams is crucial to create valuable solutions that enhance the platform's offerings. You will own the end-to-end delivery of Applied AI projects, ensuring adherence to timelines, budget, and engineering quality. Your responsibilities will involve managing progress meetings, addressing project issues promptly, and communicating effectively with stakeholders. Additionally, you will participate in developing a long-term technology roadmap, influence leaders, and drive a culture of continuous improvement within the organization. 
While we believe education is important, we value experience and practical skills more. A Bachelor's or Master's degree in engineering from a reputed institute is preferred, along with at least 5 years of leadership experience in overseeing Applied AI engineering teams. A successful track record in solution architecting and AI-based software development is essential. Proficiency in Generative AI, Google AI APIs, Machine Learning, and various software development technologies is required. Excellent communication skills, the ability to deliver complex software projects efficiently, and a knack for thriving in a dynamic environment are key attributes for this role. Experience in client-facing roles and relationship building, and a deep understanding of technology stacks including Python, PySpark, TensorFlow, databases like MongoDB and Google Cloud SQL, and Google AI platforms such as App Engine, GKE, Vertex AI, Doc AI, and Conversational AI, are important for success in this position.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

The role of warehousing and logistics systems is becoming increasingly crucial in enhancing the competitiveness of various companies and contributing to the overall efficiency of the global economy. Modern intra-logistics solutions integrate cutting-edge mechatronics, sophisticated software, advanced robotics, computational perception, and AI algorithms to ensure high throughput and streamlined processing for critical commercial logistics functions. Our Warehouse Execution Software is designed to optimize intralogistics and warehouse automation by utilizing advanced optimization techniques. By synchronizing discrete logistics processes, we have created a real-time decision engine that maximizes labor and equipment efficiency. Our software empowers customers with the operational agility essential for meeting the demands of an omni-channel environment. We are seeking a dynamic individual who can develop state-of-the-art MLOps and DevOps frameworks for AI model deployment. The ideal candidate should possess expertise in cloud technologies, deployment architectures, and software production standards. Moreover, effective collaboration within interdisciplinary teams is key to successfully guiding products through the development cycle.

**Core Job Responsibilities:**
- Develop comprehensive pipelines covering the ML lifecycle from data ingestion to model evaluation.
- Collaborate with AI scientists to expedite the operationalization of ML algorithms.
- Establish CI/CD/CT pipelines for ML algorithms.
- Implement model deployment both in the cloud and in on-premises edge environments.
- Lead a team of DevOps/MLOps engineers.
- Stay updated on new tools, technologies, and industry best practices.

**Key Qualifications:**
- Master's degree in Computer Science, Software Engineering, or a related field.
- Proficiency in cloud platforms, particularly GCP, and relevant skills such as Docker, Kubernetes, and edge computing.
- Familiarity with task orchestration tools such as MLflow, Kubeflow, Airflow, Vertex AI, and Azure ML.
- Strong programming skills, preferably in Python.
- Robust DevOps expertise including Linux/Unix, testing, automation, Git, and build tools.
- Knowledge of data engineering tools like Beam, Spark, Pandas, SQL, and GCP Dataflow is advantageous.
- Minimum 5 years of experience in relevant fields, including academic exposure.
- At least 3 years of experience managing a DevOps/MLOps team.
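The CT (continuous training) part of a CI/CD/CT pipeline is typically a retrain-evaluate-promote gate: a candidate model replaces the current one only if it clears a quality bar. A schematic sketch; the threshold and the lambda stand-ins for real training and evaluation are invented:

```python
def ct_step(train_fn, eval_fn, current_score, min_gain=0.01):
    """One continuous-training cycle: retrain, evaluate, gate promotion."""
    candidate = train_fn()
    score = eval_fn(candidate)
    if score >= current_score + min_gain:
        return candidate, score, True   # promote the candidate
    return None, score, False           # keep the current model

# Stand-ins: a real pipeline would train on fresh data and evaluate on a
# held-out set; here the candidate and its score are hard-coded.
model, score, promoted = ct_step(
    train_fn=lambda: "model-v2",
    eval_fn=lambda m: 0.87,
    current_score=0.84,
)
print(promoted, score)
```

In an orchestrator such as Kubeflow or Airflow, each of these functions would be its own pipeline step, with the gate deciding whether the deployment stage runs at all.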

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled and motivated Technical Lead who will be joining our growing team. Your role will involve leading and delivering complex technical projects in areas such as AI/ML, Full Stack Development, and Cloud-based solutions. Your responsibilities will include overseeing the end-to-end execution of multiple software development projects, focusing on AI/ML initiatives and products to ensure timely delivery and high quality. You will be responsible for architecture design, technical planning, and code quality across the team, specifically for scalable AI/ML solutions, robust data pipelines, and integration of models into production systems. Collaboration with stakeholders, both internal and external, will be a key part of your role. You will gather requirements for AI/ML features, provide progress updates, and effectively manage expectations. Mentoring and guiding developers to foster a culture of continuous improvement and technical excellence, especially in AI/ML best practices, model development, and ethical AI considerations, will be essential. You will work closely with cross-functional teams, including QA, DevOps, and UI/UX designers, to seamlessly integrate AI/ML models and applications into broader systems. Implementing best practices in development, deployment, and version control with a strong emphasis on MLOps and reproducible AI/ML workflows is crucial. Tracking project milestones, managing technical risks, and ensuring that AI/ML projects align with overarching business goals will be part of your responsibilities. Participating in client calls to provide technical insights and solution presentations, demonstrating the value and capabilities of our AI/ML offerings, will be required. Driving research, experimentation, and adoption of cutting-edge AI/ML algorithms and techniques to enhance product capabilities is also expected from you. 
Required Skills:
- Strong hands-on experience in at least one full-stack framework (e.g., MERN stack, Python with React).
- Proven experience managing and delivering end-to-end AI/ML projects or products.
- Proficiency in major AI/ML frameworks and libraries such as TensorFlow, PyTorch, and Scikit-learn.
- Solid experience with data processing, feature engineering, and data pipeline construction for machine learning workloads.
- Proficiency in project tracking tools like Jira, Trello, or Asana.
- Solid understanding of the SDLC, Agile methodologies, and CI/CD practices.
- Strong knowledge of cloud platforms like AWS, Azure, or GCP, especially their AI/ML services.
- Excellent problem-solving, communication, and leadership skills.

Preferred Qualifications:
- Bachelor's or Master's degree in a related field.
- Experience with containerization technologies and microservices architecture.
- Exposure to MLOps practices and tools.
- Prior experience in a client-facing technical leadership role.
- Familiarity with big data technologies.
- Contributions to open-source AI/ML projects or relevant publications are a plus.
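One reason the feature-engineering and pipeline skills above matter for MLOps is train/serve consistency: statistics fitted on training data must be reused, not recomputed, at inference time. A stdlib-only stand-in for what scikit-learn's Pipeline and StandardScaler handle:

```python
class StandardScaler:
    """Fit mean/std on training data; reuse them unchanged at serving time."""

    def fit(self, values):
        n = len(values)
        self.mean = sum(values) / n
        var = sum((v - self.mean) ** 2 for v in values) / n
        self.std = var ** 0.5 or 1.0  # guard against constant features
        return self

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

# Fit once on training data...
scaler = StandardScaler().fit([10.0, 12.0, 14.0])

# ...then apply the SAME parameters to live requests. Recomputing the
# mean on serving traffic would silently skew every prediction.
print(scaler.transform([12.0]))  # the training mean scales to 0.0
```

In production the fitted parameters are serialized alongside the model, so the serving path cannot drift from the training path.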

Posted 1 month ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Capability Lead GCP Location: Bengaluru / Hyderabad / Chennai Job Summary We are seeking a dynamic and visionary Capability Lead GCP to lead and scale our Google Cloud Platform practice. This is a senior leadership role that involves working directly with vertical business leaders to shape and deliver data-driven cloud strategies. The ideal candidate will bring a strong blend of technical depth, strategic thinking, and the ability to provide solutions that align with organizational goals. This role is critical to driving digital transformation across the enterprise and will act as a key advisor to senior stakeholders. Key Responsibilities Act as a strategic partner to vertical and business unit leaders, helping shape their cloud data strategy and providing tailored GCP-based solutions to solve business challenges. Own and lead the Data & Analytics capability on GCP, including strategy, delivery governance, presales, and innovation. Design and implement cloud-native data architectures using GCP services such as BigQuery, Looker, Dataflow, Pub/Sub, and Vertex AI. Lead client engagements end-to-end, including stakeholder workshops, solution design, proposals, estimations, and proof of concepts (POCs). Collaborate with internal business teams and Google Cloud to define go-to-market strategies and reusable solution accelerators. Guide organisations through cloud transformation journeys with an emphasis on data modernisation, governance, and scalable analytics platforms. Build, lead, and mentor a team of engineers, architects, and consultants, promoting continuous learning and GCP certifications. Stay ahead of emerging GCP trends and influence internal and external stakeholders on adoption strategies.

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

kolkata, west bengal

On-site

As a Senior Machine Learning Engineer with over 10 years of experience, you will play a crucial role in designing, building, and deploying scalable machine learning systems in production. In this role, you will collaborate closely with data scientists to operationalize models, take ownership of ML pipelines from end to end, and enhance the reliability, automation, and performance of our ML infrastructure. Your primary responsibilities will include designing and constructing robust ML pipelines and services for training, validation, and model deployment. You will work in collaboration with various stakeholders such as data scientists, solution architects, and DevOps engineers to ensure alignment with project goals and requirements. Additionally, you will be responsible for ensuring cloud integration compatibility with AWS and Azure, building reusable infrastructure components following best practices in DevOps and MLOps, and adhering to security standards and regulatory compliance. To excel in this role, you should possess strong programming skills in Python, have deep experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn, and be proficient in MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience in deploying models using Docker and Kubernetes, familiarity with cloud platforms and ML services, and proficiency in data engineering tools are essential for success in this position. Additionally, knowledge of CI/CD, version control, and infrastructure as code along with experience in monitoring/logging tools will be advantageous. Good-to-have skills include experience with feature stores and experiment tracking platforms, knowledge of edge/embedded ML, model quantization, and optimization, as well as familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases and experience in leading cross-functional initiatives or mentoring junior engineers will also be beneficial. 
Joining Ericsson will provide you with an exceptional opportunity to leverage your skills and creativity to address some of the world's toughest challenges. You will be part of a diverse team of innovators who are committed to pushing the boundaries of innovation and crafting groundbreaking solutions. As a member of this team, you will be challenged to think beyond conventional limits and contribute to shaping the future of technology.
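Drift detection of the kind this role describes is often delegated to libraries such as Evidently, but the underlying statistic is small enough to sketch directly. Below is a minimal, illustrative Population Stability Index (PSI) check in plain Python; the bin count, the smoothing of empty bins, and the 0.2 alert threshold are common conventions assumed for the example, not requirements from the posting:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample.
    Values above ~0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty bins so the log term stays finite.
        total = len(sample)
        return [max(c, 1) / total for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(1000)]  # training-time feature values
identical = list(reference)                 # no drift
shifted = [x + 5.0 for x in reference]      # large mean shift

print(round(psi(reference, identical), 4))  # 0.0: distributions match
print(psi(reference, shifted) > 0.2)        # True: flag for retraining
```

In a production pipeline a check like this would run on a schedule against fresh inference logs, feeding the retraining workflow the posting mentions.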

Posted 1 month ago

Apply

15.0 - 22.0 years

0 Lacs

karnataka

On-site

You will be responsible for solution design, client engagement, and delivery oversight, along with senior stakeholder management. Your role will involve leading and driving Google Cloud solutioning for customer requirements, RFPs, proposals, and delivery. You will establish governance frameworks, delivery methodologies, and reusable assets to scale the practice. It is essential to have the ability to take initiative and deliver in challenging engagements spread across multiple geographies. Additionally, you will lead the development of differentiated capabilities and offerings in areas such as application modernization & migration, cloud-native development, and AI agents. Collaboration with sales and pre-sales teams to shape solutions and win strategic deals, including large-scale application modernization and migrations will be a key aspect of your role. You will spearhead Google Cloud's latest products & services like AgentSpace, AI Agent development using GCP-native tools such as ADK, A2A Protocol, and Model Context Protocol. Building and mentoring a high-performing team of cloud architects, engineers, and consultants will also be part of your responsibilities. Driving internal certifications, specialization audits, and partner assessments to maintain Google Cloud Partner status is crucial. Representing the organization in partner forums, webinars, and industry/customer events is also expected from you. To qualify for this role, you should have 15+ years of experience in IT, with at least 3 years in Google Cloud applications architecture, design, and solutioning. Additionally, a minimum of 5 years of experience in designing and developing Java applications/platforms is required. Deep expertise in GCP services including Compute, Storage, BigQuery, Cloud Functions, Anthos, and Vertex AI is essential. 
Proven experience in leading Google Cloud transformation programs, strong solution architecture, and implementation experience for Google Cloud modernization & migration programs are important qualifications. Strong experience in stakeholder and client management is also necessary. Being Google Cloud certified with the Google Cloud Professional architect certification, self-motivated to quickly learn new technologies and platforms, and possessing excellent presentation and communication skills are crucial for this role. Preferred qualifications include Google Cloud Professional certifications (Cloud Architect), experience with partner ecosystems, co-selling with Google, and managing joint GTM motions, as well as exposure to regulated industries (e.g., BFSI, Healthcare) and global delivery models.

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

jaipur, rajasthan

On-site

As an AI / ML Engineer, you will be responsible for utilizing your expertise in the field of Artificial Intelligence and Machine Learning to develop innovative solutions. You should hold a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field. With a minimum of 6 years of experience in AI/ML, you are expected to demonstrate proficiency in Python and various ML libraries such as scikit-learn, XGBoost, pandas, NumPy, matplotlib, and seaborn. In this role, you will need a strong understanding of machine learning algorithms and deep learning architectures including CNNs, RNNs, and Transformers. Hands-on experience with TensorFlow, PyTorch, or Keras is essential. You should also have expertise in data preprocessing, feature selection, exploratory data analysis (EDA), and model interpretability. Additionally, familiarity with API development and deploying models using frameworks like Flask, FastAPI, or similar tools is required. Experience with MLOps tools such as MLflow, Kubeflow, DVC, and Airflow will be beneficial. Knowledge of cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML is preferred. Proficiency in version control using Git, CI/CD processes, and containerization with Docker is essential for this role. Bonus skills that would be advantageous include familiarity with NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers), Computer Vision experience using OpenCV or YOLO/Detectron, and knowledge of Reinforcement Learning or Generative AI (GANs, LLMs). Experience with vector databases such as Pinecone or Weaviate, as well as LangChain for AI agent building, is a plus. Familiarity with data labeling platforms and annotation workflows will also be beneficial. In addition to technical skills, you should possess soft skills such as an analytical mindset, strong problem-solving abilities, effective communication, and collaboration skills. 
The ability to work independently in a fast-paced, agile environment is crucial. A passion for AI/ML and a proactive approach to staying updated with the latest developments in the field are highly desirable for this role.
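As a small illustration of the data preprocessing and EDA skills listed above, here is a stdlib-only profiling pass over raw records; the field names and the missing-value convention are invented for the sketch (real work would typically use pandas):

```python
import math

def profile(records, numeric_fields):
    """Per-field mean, standard deviation, and missing-rate summary."""
    report = {}
    n = len(records)
    for field in numeric_fields:
        values = [r[field] for r in records if r.get(field) is not None]
        if not values:
            continue  # skip fully-missing fields in this sketch
        missing = n - len(values)
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        report[field] = {
            "mean": round(mean, 3),
            "std": round(math.sqrt(var), 3),
            "missing_rate": round(missing / n, 3),
        }
    return report

rows = [
    {"age": 30, "income": 52000},
    {"age": 40, "income": None},  # missing value, to be imputed later
    {"age": 50, "income": 61000},
]
print(profile(rows, ["age", "income"]))
```

A report like this is usually the first gate before feature selection: fields with high missing rates or degenerate variance get flagged before any model sees them.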

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

You will be working as an AI Platform Engineer in Bangalore as part of the GenAI COE Team. Your key responsibilities will involve developing and promoting scalable AI platforms for customer-facing applications. It will be essential to evangelize the platform with customers and internal stakeholders, ensuring scalability, reliability, and performance to meet business needs. Your role will also entail designing machine learning pipelines for experiment management, model management, feature management, and model retraining. Implementing A/B testing of models and designing APIs for model inferencing at scale will be crucial. You should have proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. As an AI Platform Engineer, you will serve as a subject matter expert in LLM serving paradigms, with in-depth knowledge of GPU architectures. Expertise in distributed training and serving of large language models, along with proficiency in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM, will be required. Demonstrating proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results will be part of your responsibilities. Reducing training and resource requirements for fine-tuning LLM and LVM models will also be essential. Having extensive knowledge of different LLM models and providing insights on their applicability based on use cases is crucial. You should have proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. Your proficiency in DevOps and LLMOps practices, along with knowledge of Kubernetes, Docker, and container orchestration, will be necessary. A deep understanding of LLM orchestration frameworks such as Flowise, Langflow, and Langgraph is also required. 
In terms of skills, you should be familiar with LLM models like Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, and Llama, as well as LLMOps tools like MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, and Azure AI. Additionally, knowledge of databases/data warehouse systems like DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, and Google BigQuery, as well as cloud platforms such as AWS, Azure, and GCP, is essential. Proficiency in DevOps tools like Kubernetes, Docker, Fluentd, Kibana, Grafana, and Prometheus, along with cloud certifications like AWS Professional Solution Architect and Azure Solutions Architect Expert, will be beneficial. Strong programming skills in Python, SQL, and JavaScript are required for this full-time role, with an in-person work location.
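The A/B testing of models called out above is commonly implemented as a deterministic hash split, so a given user is routed to the same model variant on every request without any stored state. A minimal sketch, with the experiment name and the 10% treatment share as invented assumptions:

```python
import hashlib

def assign_variant(user_id, experiment, treatment_share=0.1):
    """Deterministically route a user to 'control' or 'treatment'.
    Hashing user_id together with the experiment name keeps assignments
    stable across requests but independent between experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# A given user is always routed the same way:
assert assign_variant("user-42", "reranker-v2") == assign_variant("user-42", "reranker-v2")

share = sum(
    assign_variant(f"user-{i}", "reranker-v2") == "treatment" for i in range(10_000)
) / 10_000
print(f"observed treatment share: {share:.3f}")  # close to 0.10
```

An inference API would call this at request time to pick which deployed model endpoint serves the prediction, logging the variant alongside the outcome for later analysis.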

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

Sabre is a technology company that powers the global travel industry. By leveraging next-generation technology, we create global technology solutions that take on the biggest opportunities and solve the most complex challenges in travel. Positioned at the center of the travel industry, we shape the future by offering innovative advancements that pave the way for a more connected and seamless ecosystem. Our solutions power mobile apps, online travel sites, airline and hotel reservation networks, travel agent terminals, and many other platforms, connecting people with moments that matter. Sabre is seeking a talented Senior Data Science Engineer for the SabreMosaic team. In this role, you will plan, design, develop, and test data science and data engineering software systems or applications for software enhancements and new products based on cloud-based solutions. Role and Responsibilities: - Develop, code, test, and debug new complex data-driven software solutions or enhancements to existing products. - Design, plan, develop, and improve applications using advanced cloud-native technology. - Work on issues requiring in-depth knowledge of organizational objectives and implement strategic policies in selecting methods and techniques. - Encourage high coding standards, best practices, and high-quality output. - Interact regularly with subordinate supervisors, architects, product managers, HR, and others on project or team performance matters. - Provide technical mentorship and cultural/competency-based guidance to teams. - Offer larger business/product context and mentor on specific tech stacks/technologies. Qualifications and Education Requirements: - Minimum 4-6 years of related experience as a full-stack developer. - Expertise in Data Engineering/DW projects with Google Cloud-based solutions. - Designing and developing enterprise data solutions on the GCP cloud platform.
- Experience with relational databases and NoSQL databases like Oracle, Spanner, BigQuery, etc. - Expert-level SQL skills for data manipulation, validation, and analysis. - Experience in designing data modeling, data warehouses, data lakes, and analytics platforms on GCP. - Expertise in designing ETL data pipelines and data processing architectures for data warehouses. - Strong experience in designing Star & Snowflake Schemas and knowledge of Dimensional Data Modeling. - Collaboration with data scientists, data teams, and engineering teams using Google Cloud platform for data analysis and data modeling. - Familiarity with integrating datasets from multiple sources for data modeling for analytical and AI/ML models. - Understanding and experience in Pub/Sub, Kafka, Kubernetes, GCP, AWS, Hive, Docker. - Expertise in Java Spring Boot / Python or other programming languages used for Data Engineering and integration projects. - Strong problem-solving and analytical skills. - Exposure to AI/ML, MLOps, and Vertex AI is an advantage. - Familiarity with DevOps practices like CI/CD pipelines. - Airline domain experience is a plus. - Excellent spoken and written communication skills. - GCP Cloud Data Engineer Professional certification is a plus. We will carefully consider your application and review your details against the position criteria. Only candidates who meet the minimum criteria for the role will proceed in the selection process.
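The dimensional modeling skills listed above can be illustrated with a toy load step that splits denormalized rows into one dimension table and one fact table; the airline/fare fields are invented for the sketch and stand in for what would be DDL plus an ETL pipeline in BigQuery:

```python
def load_star(flat_rows):
    """Split denormalized booking rows into a toy star schema:
    one airline dimension (deduplicated, with surrogate keys) plus
    a fact table of measures referencing those keys."""
    dim_airline, facts = {}, []
    for row in flat_rows:
        code = row["airline_code"]
        if code not in dim_airline:  # deduplicate the dimension
            dim_airline[code] = {
                "airline_key": len(dim_airline) + 1,
                "airline_name": row["airline_name"],
            }
        facts.append({
            "airline_key": dim_airline[code]["airline_key"],
            "fare_usd": row["fare_usd"],
        })
    return dim_airline, facts

rows = [
    {"airline_code": "AA", "airline_name": "American", "fare_usd": 420},
    {"airline_code": "BA", "airline_name": "British Airways", "fare_usd": 610},
    {"airline_code": "AA", "airline_name": "American", "fare_usd": 380},
]
dims, facts = load_star(rows)
print(len(dims), len(facts))  # 2 dimension rows, 3 fact rows
```

The same shape scales up directly: the dimension becomes a slowly changing dimension table, and the fact rows stream into a partitioned warehouse table keyed on the surrogate ids.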

Posted 1 month ago

Apply

10.0 - 20.0 years

35 - 40 Lacs

Mumbai

Work from Office

Job Title: Data Science Expert (Mentor & Trainer) Location: Onsite Mumbai, India Employment Type: Full-Time About the Role: We are seeking an experienced and highly skilled Data Science Expert to join our growing team at our Mumbai office. This is a full-time, onsite role focused not only on solving complex data problems but also on mentoring and training Junior Data Science Engineers. The ideal candidate will bring deep technical expertise in data science and machine learning, along with a passion for teaching and developing talent. Key Responsibilities: Lead the development of end-to-end data science solutions using advanced ML, NLP, and Computer Vision techniques. Train, mentor, and support junior data science engineers in coding, model development, and best practices. Architect and implement AI-driven solutions such as chatbots, OCR systems, and facial recognition applications. Translate complex business problems into actionable data science projects and deliver measurable results. Design and lead internal workshops, code reviews, and learning sessions to upskill the team. Collaborate with engineering and product teams to deploy models and insights into production environments. Stay abreast of the latest AI/ML trends and integrate cutting-edge techniques into projects where applicable. Desired Skills & Qualifications: Experience: 6+ years in Data Science/Machine Learning with at least 1-2 years of team mentoring or leadership experience. Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. Technical Expertise Required: Strong proficiency in Python and SQL; R is a plus. Solid hands-on experience with Deep Learning and Neural Networks, particularly in Natural Language Processing (NLP), Generative AI, and Computer Vision. Familiarity with frameworks and libraries such as TensorFlow, Keras, PyTorch, OpenCV, SpaCy, NLTK, BERT, ELMo, etc.
Experience developing Chatbots, OCR, and Face Recognition systems is preferred. Hands-on knowledge of cloud platforms (AWS, Azure, or Google Cloud Platform). Experience applying statistical and data mining techniques such as GLM, regression, clustering, random forests, boosting, decision trees, etc. Strong understanding of model validation, performance tuning, and deployment strategies. Soft Skills: Excellent communication and presentation skills, especially in explaining complex models to non-technical audiences. Demonstrated ability to mentor, train, and lead junior team members effectively. Strong analytical and problem-solving mindset with a detail-oriented approach. What We Offer: Competitive salary and benefits. A collaborative and intellectually stimulating environment. Career growth and leadership development opportunities within a fast-paced team.
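Of the statistical techniques listed above, simple linear regression has a closed form that makes a handy whiteboard example for the mentoring side of this role; the data points below are invented for the illustration:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y ≈ a*x + b,
    via the usual covariance/variance closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 1.0
```

Working this out by hand next to the library call (`sklearn.linear_model.LinearRegression` would give the same coefficients here) is a common way to connect the theory the posting lists with the tooling juniors actually use.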

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview of 66degrees 66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way. Overview of Role As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem. Responsibilities Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption. AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments. Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including, BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets. Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing. Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production. Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring. Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health. Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models. Qualifications 3-5 years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads. Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow. Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred). SQL skills for complex data manipulation, querying, and optimization. Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes. Proven experience designing, building, and optimizing large-scale ETL/ELT processes. Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts. Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders. 66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
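The data-quality work described above usually starts with row-level checks before records ever reach a training set. A minimal, framework-free sketch, with the rules and field names as invented assumptions (a real pipeline might express the same rules in a tool like Great Expectations or Dataplex data-quality tasks):

```python
def validate(rows, required, ranges):
    """Partition rows into clean/rejected, with a reason per rejection."""
    clean, rejected = [], []
    for row in rows:
        missing = [f for f in required if row.get(f) is None]
        out_of_range = [
            f for f, (lo, hi) in ranges.items()
            if row.get(f) is not None and not lo <= row[f] <= hi
        ]
        if missing or out_of_range:
            rejected.append((row, {"missing": missing, "out_of_range": out_of_range}))
        else:
            clean.append(row)
    return clean, rejected

rows = [
    {"user_id": 1, "clicks": 12},
    {"user_id": 2, "clicks": -3},    # negative count: out of range
    {"user_id": None, "clicks": 4},  # missing key
]
clean, rejected = validate(rows, required=["user_id"], ranges={"clicks": (0, 10_000)})
print(len(clean), len(rejected))  # 1 2
```

Keeping the rejection reasons alongside the rows is what makes lineage and debugging possible later: the rejected records can land in a quarantine table instead of silently disappearing.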

Posted 1 month ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description Google Cloud Infrastructure Support Engineer will be responsible for ensuring the reliability, performance, and security of our Google Cloud Platform (GCP) infrastructure. Work closely with cross-functional teams to troubleshoot issues, optimize infrastructure, and implement best practices for cloud architecture. Experience with Terraform for deploying and managing infrastructure templates. Administer BigQuery environments, including managing datasets, access controls, and optimizing query performance. Be familiar with Vertex AI for monitoring and managing machine learning model deployments. Knowledge of GCP's Kubernetes Engine and its integration with the cloud ecosystem. Understanding of cloud security best practices and experience with implementing security measures. Knowledge of setting up and managing data clean rooms within BigQuery. Understanding of the Analytics Hub platform and how it integrates with data clean rooms to facilitate sensitive data-sharing use cases. Knowledge of Dataplex and how it integrates with other Google Cloud services such as BigQuery, Dataproc Metastore, and Data Catalog. Key Responsibilities Provide technical support for our Google Cloud Platform infrastructure, including compute, storage, networking, and security services. Monitor system performance and proactively identify and resolve issues to ensure maximum uptime and reliability.
Collaborate with cross-functional teams to design, implement, and optimize cloud infrastructure solutions. Automate repetitive tasks and develop scripts to streamline operations and improve efficiency. Document infrastructure configurations, processes, and procedures. Qualifications Required: Strong understanding of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, VPC networking, and IAM. Experience with BigQuery and Vertex AI. Proficiency in scripting languages such as Python, Bash, or PowerShell. Experience with infrastructure as code tools such as Terraform or Google Deployment Manager. Strong communication and collaboration skills. Bachelor's Degree in Computer Science or related discipline, or the equivalent in education and work experience. Preferred Google Cloud certification (e.g., Google Cloud Certified - Professional Cloud Architect, Google Cloud Certified - Professional Cloud DevOps Engineer) Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
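Much of the support scripting mentioned above deals with transient API errors, where retry with exponential backoff is the standard pattern. A stdlib-only sketch: the `flaky_call` stand-in and the delay schedule are invented for the example, and a real script would sleep (ideally with jitter) instead of merely collecting the delays:

```python
def with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn on exception, doubling the delay each attempt.
    Returns (result, delays_used); delays are computed but, for this
    sketch, collected instead of slept on."""
    delays = []
    for attempt in range(max_attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: surface the last error
            delay = base_delay * (2 ** attempt)
            delays.append(delay)  # real code: time.sleep(delay + jitter)

calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 Service Unavailable")  # simulated transient error
    return "ok"

result, delays = with_backoff(flaky_call)
print(result, delays)  # ok [0.5, 1.0]
```

Google's client libraries ship their own retry policies, but the same shape is useful for wrapping anything they don't cover, such as custom health checks or REST calls made from maintenance scripts.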

Posted 1 month ago

Apply

7.0 - 11.0 years

7 - 11 Lacs

Bengaluru, Karnataka, India

On-site

About this role: Wells Fargo is seeking a Principal Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow. In this role, you will: Act as an advisor to leadership to develop or influence applications, network, information security, database, operating systems, or web technologies for highly complex business and technical needs across multiple groups Lead the strategy and resolution of highly complex and unique challenges requiring in-depth evaluation across multiple areas or the enterprise, delivering solutions that are long-term, large-scale and require vision, creativity, innovation, advanced analytical and inductive thinking Translate advanced technology experience, an in-depth knowledge of the organizations tactical and strategic business objectives, the enterprise technological environment, the organization structure, and strategic technological opportunities and requirements into technical engineering solutions Provide vision, direction and expertise to leadership on implementing innovative and significant business solutions Maintain knowledge of industry best practices and new technologies and recommends innovations that enhance operations or provide a competitive advantage to the organization Strategically engage with all levels of professionals and managers across the enterprise and serve as an expert advisor to leadership Required Qualifications: 7+ years of Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education Desired Qualifications: 5+ years of experience in Cloud Native Application 5+ years of User Interface (UI) experience such as, Angular or React 5+ years of event driven development, microservice architecture, 
API designs 5+ years of experience using development toolkit including Jenkins, GitHub, and Bitbucket 3+ years working with public cloud platforms (e.g., Google Cloud/Vertex AI, AWS, and Azure) and hands-on experience with Prompt Engineering, AI/ML frameworks (e.g., TensorFlow, PyTorch) Knowledge and understanding of relational and non-relational databases, such as PostgreSQL, SQL Server, MongoDB, Neo4j, etc. Role: Head - Engineering Industry Type: IT Services & Consulting Department: Engineering - Software & QA Employment Type: Full Time, Permanent Role Category: Software Development Education UG: Any Graduate PG: Any Postgraduate

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

maharashtra

On-site

You are a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer joining a team pivotal in the development of a cutting-edge reporting platform. This platform is designed to measure and optimize online marketing campaigns effectively. Your role will involve focusing on data engineering, the ML model lifecycle, and cloud-native technologies. You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. Additionally, you will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. Furthermore, you will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translating complex model insights into actionable business recommendations and presenting findings to stakeholders will also be part of your responsibilities. Qualifications & Skills: Educational Qualifications: - Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or related field. - Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus. Must-Have Skills: - Experience: 5-10 years with the mentioned skillset & relevant hands-on experience. - Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer). - ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP. - Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms. - MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools). - Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing. Nice-to-Have Skills: - Experience with Graph ML, reinforcement learning, or causal inference modeling. - Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards. - Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies. - Experience with distributed computing frameworks (Spark, Dask, Ray). Location: - Bengaluru Brand: - Merkle Time Type: - Full time Contract Type: - Permanent
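The A/B testing methodology in the nice-to-have list typically reduces to a two-proportion z-test on conversion counts when deciding whether a campaign variant actually moved the needle; a stdlib-only sketch with invented counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: the two conversion rates are equal,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts at 4.0%, the variant at 5.0%, on 10k users each.
z = two_proportion_z(400, 10_000, 500, 10_000)
print(round(z, 2), abs(z) > 1.96)  # significant at the 5% level if True
```

In a reporting platform like the one described, a statistic of this kind sits behind the "is this lift real?" badge on a campaign dashboard, with the 1.96 cutoff corresponding to a two-sided 95% confidence level.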

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

About KPMG in India KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment. We are seeking an experienced and highly skilled Senior Google Cloud Analytics & Vertex AI Specialist for the position of Associate Director with 12-15 years of experience, specifically focusing on Google Vertex AI. The ideal candidate will have a deep understanding of Google Cloud Platform (GCP) and extensive hands-on experience with Google Cloud analytics services and Vertex AI. The role involves leading projects, designing scalable data solutions, driving the adoption of AI and machine learning practices within the organization, and supporting pre-sales activities. A minimum of 2 years of hands-on experience with Vertex AI is required. Key Responsibilities: - Architect and Implement: Design and implement end-to-end data analytics solutions using Google Cloud services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage. - Vertex AI Development: Develop, train, and deploy machine learning models using Vertex AI. Utilize Vertex AI's integrated tools for model monitoring, versioning, and CI/CD pipelines. Implement custom machine learning pipelines using Vertex AI Pipelines.
Utilize Vertex AI Feature Store for feature management and Vertex AI Model Registry for model tracking. - Data Integration: Integrate data from various sources, ensuring data quality and consistency across different systems. - Performance Optimization: Optimize data pipelines and analytics processes for maximum efficiency and performance. - Leadership and Mentorship: Lead and mentor a team of data engineers and data scientists, providing guidance and support on best practices in GCP and AI/ML. - Collaboration: Work closely with stakeholders to understand business requirements and translate them into technical solutions. - Innovation: Stay updated with the latest trends and advancements in Google Cloud services and AI technologies, advocating for their adoption when beneficial. - Pre-Sales Support: Collaborate cross-functionally to understand client requirements, design tailored solutions, prepare and deliver technical presentations and product demonstrations, and assist in proposal and RFP responses. - Project Delivery: Manage and oversee the delivery of data analytics and AI/ML projects, ensuring timely and within budget completion while coordinating with cross-functional teams. Qualifications: - Experience: 12-15 years in data engineering, data analytics, and AI/ML with a focus on Google Cloud Platform. - Technical Skills: Proficient in Google Cloud services (BigQuery, Dataflow, Pub/Sub, Cloud Storage, Vertex AI), strong programming skills in Python and SQL, experience with machine learning frameworks (TensorFlow, PyTorch), data visualization tools (Looker, Data Studio). - Pre-Sales and Delivery Skills: Experience in supporting pre-sales activities, managing and delivering complex data analytics and AI/ML projects. - Certifications: Google Cloud Professional Data Engineer or Professional Machine Learning Engineer certification is a plus. - Soft Skills: Excellent problem-solving, communication, and leadership skills. Qualifications: - B.E./B.Tech/Post Graduate,
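The model-tracking responsibility above is easier to reason about in miniature. The sketch below is a toy, in-memory registry illustrating the versioning and stage-promotion semantics that a managed service like Vertex AI Model Registry provides; none of the class or method names here come from the Google SDK, and the `gs://` URIs are made up for the example:

```python
class ModelRegistry:
    """Toy registry: each display name maps to an ordered list of versions."""
    def __init__(self):
        self._models = {}

    def register(self, name, artifact_uri, stage="staging"):
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1  # auto-incrementing version number
        versions.append({"version": version, "uri": artifact_uri, "stage": stage})
        return version

    def promote(self, name, version, stage="production"):
        for v in self._models[name]:
            if v["version"] == version:
                v["stage"] = stage
        # Demote any other version in that stage so exactly one serves traffic.
        for v in self._models[name]:
            if v["version"] != version and v["stage"] == stage:
                v["stage"] = "archived"

    def production_uri(self, name):
        for v in self._models[name]:
            if v["stage"] == "production":
                return v["uri"]
        return None

registry = ModelRegistry()
registry.register("churn-model", "gs://bucket/churn/v1")
v2 = registry.register("churn-model", "gs://bucket/churn/v2")
registry.promote("churn-model", v2)
print(registry.production_uri("churn-model"))  # gs://bucket/churn/v2
```

The managed service adds durability, IAM, aliases, and endpoint deployment on top of exactly this version/stage bookkeeping.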

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

delhi

On-site

We are seeking a talented Systems Architect (AVP level) with specialized knowledge in designing and scaling Generative AI solutions for production environments. In this pivotal position, you will collaborate across various teams, including data scientists, ML engineers, and product leaders, to shape enterprise-level GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI; architecting data pipelines and training/inference workflows; and integrating MLOps. You will be tasked with ensuring that systems are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should possess 12-15 years of experience in cloud-native and distributed systems, with 2-3 years focused on GenAI/LLMs utilizing tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is essential, and experience with RAG, semantic search, agent orchestration, and MLOps is highly valued. Strong architectural acumen and effective stakeholder communication skills are required; cloud certifications, AI open-source contributions, and knowledge of security and governance are all advantageous.

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

delhi

On-site

We are looking for a Systems Architect (AVP level) with extensive experience in designing and scaling Generative AI solutions for production. As a Systems Architect, you will play a crucial role in collaborating with data scientists, ML engineers, and product leaders to shape enterprise-grade GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI. You will also be responsible for architecting data pipelines, training/inference workflows, and MLOps integration, and for ensuring that the systems you design are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should have 12-15 years of experience in cloud-native and distributed systems, with at least 2-3 years of experience in GenAI/LLMs using tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is required, and experience with RAG, semantic search, agent orchestration, and MLOps will be beneficial. Strong architectural thinking and effective communication with stakeholders are essential skills. Preferred qualifications include cloud certifications, AI open-source contributions, and knowledge of security and governance principles. If you are passionate about designing cutting-edge Generative AI solutions and possess the necessary skills and experience, we encourage you to apply for this leadership role.
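Of the patterns this role lists, RAG is the most mechanical: embed the query, retrieve the nearest documents, then prepend them to the LLM prompt. Below is a minimal sketch of only the retrieval step, using hand-made 3-dimensional vectors in place of a real embedding model and a list in place of a vector DB; both substitutions are assumptions for illustration, not how a production system would store embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, corpus, k=2):
    """Return the texts of the k documents closest to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy "embeddings"; a real system would call an embedding model and a vector DB.
corpus = [
    {"text": "refund policy",  "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.1, 0.9, 0.0]},
    {"text": "return window",  "vec": [0.8, 0.2, 0.1]},
]
context = retrieve([1.0, 0.0, 0.0], corpus, k=2)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: How do refunds work?"
print(context)  # ['refund policy', 'return window']
```

Everything else in a RAG stack (chunking, index refresh, grounding checks) is operational scaffolding around this nearest-neighbour lookup.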

Posted 1 month ago

Apply

10.0 - 18.0 years

20 - 35 Lacs

Pune

Work from Office

Role & Responsibilities:
1. Develop and execute a strategic roadmap for executing Applied AI engineering projects with a product engineering mindset.
2. Lead and inspire a high-performance core group of developers, architects, and engineering managers.
3. Be responsible for hiring, training, coaching, team building, assessing performance, providing feedback, mentoring, and helping the team succeed. Our enthusiastic Data Scientists are just getting started, and as a manager, you guide the way and set an example by coming up with the best solutions to complex engineering problems.
4. Collaborate with architects, product management, and other engineering teams to create solutions that increase the platform's value.
5. Own delivery: oversee the end-to-end delivery of Applied AI projects, ensuring adherence to timelines, budget, and impeccable engineering quality; set up and manage periodic progress meetings; and act as a point of escalation for project issues, providing timely resolution and effective communication with stakeholders.
6. Participate with senior management in developing a long-term technology roadmap. Influence, collaborate, and communicate effectively with various leaders.

Education and Qualification:
1. Proven experience of 10+ years in a leadership role overseeing Applied AI engineering teams, preferably within a professional services environment.
2. Successful track record of solution architecting and high-end AI-based software development.
3. Strong technical background with expertise in Generative AI, Google AI APIs, machine learning, and software development methodologies, languages, and frameworks.
4. Excellent communication skills with the ability to articulate technical concepts to both technical and non-technical audiences.
5. Demonstrated success in delivering complex software projects on time and within budget.
6. Ability to thrive in a fast-paced, dynamic environment and adapt to evolving business needs.
7. Leadership qualities including strategic thinking, decision-making, problem-solving, and team-building.
8. Experience in client-facing roles with a focus on building and maintaining relationships.
9. Our technology stack is extensive, and the role requires experience and understanding of technologies like:
   a. Languages/frameworks: Python, PySpark, TensorFlow
   b. Databases: MongoDB, Google Cloud SQL, Graph/NoSQL, Bigtable
   c. Google Cloud: App Engine, GKE, Vertex AI, Doc AI, Conversational AI

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. For over 90 years, our innovative drive has kept us a step ahead of our customers' evolving needs, from advocating for safety measures like seat belts and airbags to leading in pricing sophistication, telematics, and device and identity protection.

This role is responsible for leading the use of data to make decisions. You will develop and execute new machine learning predictive modeling algorithms, build tools that use machine learning/predictive modeling to inform business decisions, integrate new data to improve modeling results, and find solutions to business problems through machine learning/predictive modeling. In addition, you will manage projects of small to medium complexity.

We are seeking a Data Scientist to apply machine learning and advanced analytics to solve complex business problems. The ideal candidate will possess technical expertise, business acumen, and a passion for solving high-impact problems. Your responsibilities will include developing machine learning models, integrating new data sources, and delivering solutions that enhance decision-making. You will collaborate with cross-functional teams to translate insights into action, from design to deployment.

Key Responsibilities:
- Design, build, and validate statistical and machine learning models for key business problems.
- Perform data exploration and analysis to uncover insights and improve model performance.
- Communicate findings to stakeholders and collaborate with teams to ensure solutions are adopted.
- Stay updated on modeling techniques, tools, and technologies, integrating innovative approaches.
- Lead data science initiatives from planning to delivery, ensuring measurable business impact.
- Provide mentorship to junior team members and lead technical teams as required.

Must-Have Skills:
- 4 to 8 years of experience in applied data science, delivering business value through machine learning.
- Proficiency in Python with experience in libraries like scikit-learn, pandas, NumPy, and TensorFlow or PyTorch.
- Strong foundation in statistical analysis, regression modeling, and classification techniques.
- Hands-on experience building and deploying machine learning models in cloud environments.
- Ability to translate complex business problems into structured data science problems.
- Strong communication, stakeholder management, analytical, and problem-solving skills.
- Proactive in identifying opportunities for data-driven decision-making.
- Experience in Agile or Scrum-based project environments.

Preferred Skills:
- Experience with Large Language Models (LLMs) and transformer architectures.
- Experience with production-grade ML platforms and orchestration tools for scaling models.

Primary Skills: Business Case Analyses, Data Analytics, Predictive Analytics, Predictive Modeling, Waterfall Project Management.
Shift Time: Shift B (India).
Recruiter Info: Annapurna Jha, email: ajhat@allstate.com.
About Allstate: The Allstate Corporation is a leading insurance provider in the US, with operations in multiple countries, including India. Allstate India is a strategic business services arm focusing on technology, innovation, and operational excellence. Learn more about Allstate India here.
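The regression-modeling fundamentals this role expects reduce to formulas worth being able to reproduce without a library. As a sketch, here is simple linear regression fit by the closed-form least-squares solution in plain Python rather than scikit-learn; the data points are invented for the example:

```python
def fit_line(xs, ys):
    """Closed-form least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
          / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def r_squared(xs, ys, slope, intercept):
    """Fraction of variance in ys explained by the fitted line."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # slope ~2, intercept ~0
print(round(r_squared(xs, ys, slope, intercept), 3))  # close to 1
```

In practice `sklearn.linear_model.LinearRegression` does the same fit with better numerics for many features, but the validation habit (always checking a goodness-of-fit measure) carries over unchanged.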

Posted 1 month ago

Apply