5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role

What we offer: Our mission is simple: Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team: DEX is the central data org for Kotak Bank, which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, which is currently an on-premise solution, into a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technology fellows a great opportunity to build things from scratch and create one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around 100+ members, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. As a member of this team, you get the opportunity to learn the fintech space, which is one of the most sought-after domains today, be an early member in the digital transformation journey of Kotak, learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way, and be futuristic in building systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake, managed compute and orchestration frameworks (including serverless data solutions), managing the central data warehouse for extremely high-concurrency use cases, building connectors for different sources, building the customer feature repository, building cost optimization solutions like EMR optimizers, performing automations and building observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, be skilled to source data from 100+ source systems and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank, which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance: This team will be the central data governance team for Kotak Bank, managing metadata platforms, Data Privacy, Data Security, Data Stewardship and the Data Quality platform.

If you've the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills.

BASIC QUALIFICATIONS for Data Engineering Manager / Software Development Manager: 10+ years of engineering experience, most of which is in the Data domain. 5+ years of engineering team management experience. 10+ years of planning, designing, developing and delivering consumer software experience. Experience partnering with product or program management teams. 5+ years of experience in managing data engineers, business intelligence engineers and/or data scientists. Experience designing or architecting (design patterns, reliability and scaling) new and existing systems. Experience managing multiple concurrent programs, projects and development teams in an Agile environment. Strong understanding of Data Platform, Data Engineering and Data Governance. Experience designing and developing large-scale, high-traffic applications.

PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
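A minimal illustrative PySpark sketch of the kind of Spark-based ETL into an AWS data lake that this posting describes is shown below; the S3 paths, key, and column names are hypothetical assumptions, not details from the role.

```python
# Minimal illustrative PySpark ETL sketch: read raw CSV from S3, clean it,
# and write partitioned Parquet back to S3. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-raw-bucket/transactions/"))       # assumed input location

cleaned = (raw
           .dropDuplicates(["txn_id"])                      # hypothetical business key
           .withColumn("txn_date", F.to_date("txn_date"))   # normalize types
           .filter(F.col("amount").isNotNull()))            # basic quality filter

(cleaned.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("s3://example-curated-bucket/transactions/"))     # assumed output location
```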
Posted 1 week ago
2.0 - 7.0 years
30 - 35 Lacs
Pune
Work from Office
At NiCE, we don't limit our challenges. We challenge our limits. Always. We're ambitious. We're game changers. And we play to win. We set the highest standards and execute beyond them. And if you're like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what's the role all about? A Software Engineer in Data Platforms, Datalake and Snowflake holds a prominent position in software development teams, responsible for designing, developing, and implementing complex data solutions that leverage both frontend and backend technologies. Here's an overview of the key responsibilities and qualifications for this role.

How will you make an impact? Maintain quality, ensure responsiveness, and help optimize new and existing systems. Collaborate with the rest of the engineering team to design and build new features on time and to budget. Maintain code integrity and organization. Understanding and implementation of security and data protection. Understanding of the business change cycle from inception to implementation, including the organization of change initiatives. Ability to coordinate build and release activities with key stakeholders.

Have you got what it takes? Bachelor's degree in Computer Science, Business Information Systems or a related field, or equivalent work experience, is required. 3+ years of experience in software development. Hands-on with the Snowflake Cloud Data Platform (minimum 2 years); Snowflake certification is preferred. Strong experience in SQL. Hands-on experience with Python/Go. Understanding of Snowflake administration (roles, RBAC, warehouses). Experience with Snowflake advanced utilities, Time Travel & Fail-safe, resource monitors, alerts, and cost optimizations. Knowledge of Snowflake data governance and security policies. Experience building Airflow DAGs and managing job dependencies (a minimal DAG sketch follows this posting). Experience with AWS technologies including S3, SQS, Lambda, Firehose, IAM, CloudWatch, etc. Excellent knowledge of Airflow. Knowledge of Snowflake cloud storage integration (AWS S3 / GCS / Azure Blob). Experience with CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments. Working knowledge of unit testing.

What's in it for you? Join an ever-growing, market-disrupting, global company where teams comprised of the best of the best work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 8574. Reporting into: Tech Manager. Role Type: Individual Contributor.

About NiCE: NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions.
Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
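As referenced in the posting above, building Airflow DAGs with managed job dependencies might look like the minimal sketch below; the DAG id, schedule, and callables are hypothetical illustrations (assuming a recent Airflow 2.x), not the team's actual pipeline.

```python
# Minimal illustrative Airflow DAG sketch: two dependent tasks on a daily schedule.
# dag_id, schedule, and callables are hypothetical examples.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_stage(**context):
    # placeholder: pull source data and land it in a staging area
    print("extracting for", context["ds"])

def load_to_snowflake(**context):
    # placeholder: load the staged data into Snowflake (e.g. via a COPY INTO statement)
    print("loading for", context["ds"])

with DAG(
    dag_id="example_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load   # job dependency: load runs only after extract succeeds
```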
Posted 1 week ago
5.0 - 7.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Location: India. Duration: 3 months (contract with possible extension). Number of Positions: 1.

Project Overview: We need Data Quality Experts with strong Collibra expertise to monitor, analyze, and enhance data transfer quality across the organization. This role will focus on leveraging Collibra's platform capabilities for Data Cataloging, Data Governance, and Data Quality & Observability (DQO), alongside hands-on data quality improvement initiatives.

Key Responsibilities: Configure and manage the Collibra Data Catalog, Data Governance, and DQO modules. Define and implement data quality rules, workflows, and policies. Perform data profiling, cleansing, duplicate detection, and anomaly detection. Integrate Collibra with enterprise data systems via REST APIs. Develop data quality dashboards and reports using Power BI or Tableau. Collaborate with business and technical teams to ensure compliance with governance standards.

Required Skills & Experience: 5+ years of experience in Data Quality, Data Governance, or Data Management. Proven hands-on experience with the Collibra platform (Data Catalog, Governance, DQO). Strong knowledge of data profiling, cleansing, and quality metrics. Proficiency in SQL for data analysis and validation. Experience integrating Collibra APIs into enterprise workflows. Familiarity with BI/visualization tools such as Power BI or Tableau.

Nice-to-Have: Experience in large-scale retail or e-commerce data environments. Knowledge of data privacy and compliance regulations (GDPR, CCPA).
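The profiling, duplicate-detection, and quality-metric work described above can be illustrated with a short, tool-agnostic sketch; it does not use the Collibra API, and the file and column names are hypothetical assumptions.

```python
# Minimal illustrative data-profiling sketch (not Collibra-specific):
# compute null rates, duplicate counts, and a simple freshness check.
import pandas as pd

df = pd.read_csv("customer_extract.csv", parse_dates=["last_updated"])  # hypothetical extract

profile = {
    "row_count": len(df),
    "null_rate_per_column": df.isna().mean().round(4).to_dict(),
    "duplicate_rows_on_key": int(df.duplicated(subset=["customer_id"]).sum()),  # assumed key
    "days_since_latest_record": (pd.Timestamp.now() - df["last_updated"].max()).days,
}

print(profile)  # these metrics would feed a DQ dashboard or Collibra DQO rules
```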
Posted 1 week ago
3.0 - 6.0 years
6 - 7 Lacs
Hyderabad
Work from Office
Job Title: Databricks Data Engineer. Experience Level: 3 to 6 Years. Location: Ahmedabad. Employment Type: Full-Time. Certifications Required: Databricks Certified Data Engineer Associate, Databricks Certified Data Engineer Professional. Cloud Certifications (Preferred): Azure, AWS, GCP.

Job Summary: We are seeking a highly skilled and certified Databricks Data Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience in implementing Lakehouse architectures, upgrading to Unity Catalog, and building robust data ingestion pipelines. This role demands proficiency in Python, PySpark, SQL, and Scala, along with a strong understanding of big data technologies, streaming workflows, and multi-cloud environments.

Key Responsibilities: Lakehouse Implementation: Design and implement scalable Lakehouse architecture using Databricks; optimize data storage and retrieval strategies for performance and cost-efficiency. Unity Catalog Upgrade: Lead and execute Unity Catalog upgrades across Databricks workspaces; ensure secure and compliant data governance and access control. Data Ingestion & Migration: Develop and maintain data ingestion pipelines from various sources (structured, semi-structured, unstructured); perform large-scale data migrations across cloud platforms and environments. Pipeline Development: Build and manage ETL/ELT pipelines using PySpark and SQL; ensure data quality, reliability, and performance across workflows. Big Data Streaming & Workflows: Implement real-time data streaming solutions using Spark Structured Streaming or similar technologies; design workflow orchestration using tools like Airflow, Databricks Workflows, or equivalent. Multi-Cloud Expertise (Preferred): Work across Azure, AWS, and GCP environments; understand cloud-native services and integration patterns for data engineering. Collaboration & Documentation: Collaborate with data scientists, analysts, and business stakeholders to understand data requirements; document technical designs, data flows, and operational procedures.

Required Skills & Qualifications: 3-6 years of hands-on experience in data engineering roles. Strong expertise in the Databricks platform and ecosystem. Proficiency in Python, PySpark, SQL, and Scala. Experience with Lakehouse architecture and Unity Catalog. Proven track record in building scalable data ingestion pipelines and performing data migrations. Familiarity with big data technologies such as Delta Lake, Apache Spark, Kafka, etc. Understanding of data lake concepts and best practices. Experience with streaming data and workflow orchestration. Certified as Databricks Data Engineer Associate and Professional (mandatory). Cloud certifications in Azure, AWS, or GCP are a strong plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities.

Nice to Have: Experience with CI/CD pipelines and DevOps practices in data engineering. Exposure to data cataloging tools and metadata management. Knowledge of data security, privacy, and compliance standards.

Why Join Us? Work on cutting-edge data engineering projects in a fast-paced environment. Collaborate with a team of passionate professionals. Opportunity to grow and expand your skills across multiple cloud platforms. Competitive compensation and benefits.
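The Lakehouse and ingestion-pipeline responsibilities above can be illustrated with a brief PySpark/Delta sketch of the bronze-to-silver pattern; the paths, table names, and schema are hypothetical, not the team's actual pipeline.

```python
# Minimal illustrative Databricks-style sketch: land raw JSON as a bronze Delta table,
# then curate a deduplicated silver table. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-example").getOrCreate()

# Bronze: ingest raw files as-is, adding load metadata
bronze = (spark.read.json("/mnt/raw/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: clean and deduplicate on a business key
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
          .dropDuplicates(["order_id"])                 # assumed business key
          .filter(F.col("order_status").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")
```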
Posted 1 week ago
5.0 - 7.0 years
8 Lacs
Bengaluru
Work from Office
Job Description: Data & AI Ops Engineer (MLOps Engineer)

Role Overview: We are looking for an experienced Data & AI Ops Engineer with expertise in MLOps, Data Engineering, and ML deployment pipelines. The role involves designing, automating, and optimizing end-to-end ML workflows from data preparation to model training, deployment, monitoring, and lifecycle management across Azure ML, AWS SageMaker, and Google Vertex AI. The ideal candidate will bring strong skills in Python, PySpark, SQL, CI/CD, and containerization, along with hands-on experience in model serving, monitoring, and optimization.

Key Responsibilities:

ML Development & Deployment: Implement and manage end-to-end ML pipelines using Azure ML Pipelines, Kubeflow Pipelines, and MLflow (a minimal MLflow sketch follows this posting). Support model development with scikit-learn, TensorFlow, and PyTorch, including training, tuning, and serialization (pickle, ONNX, TorchScript). Deploy models into production using Docker, Azure ML, AWS SageMaker, and Vertex AI with scalable serving frameworks.

MLOps & Automation: Develop CI/CD pipelines for ML workflows using GitHub Actions, MLflow CI/CD integrations, and container registries. Implement continuous training (CT), continuous integration (CI), and continuous delivery (CD) practices for ML systems. Automate data ingestion, preprocessing, and feature pipelines with PySpark and SQL.

Model Monitoring & Optimization: Monitor model performance, drift, and data quality in production environments. Implement logging, alerting, and observability for ML models and pipelines. Optimize inference performance with ONNX, TorchScript, and TensorRT (optional).

Collaboration & Governance: Partner with Data Scientists, Data Engineers, and DevOps teams to integrate ML models into business workflows. Ensure compliance with data governance, security, and regulatory policies. Contribute to the standardization of MLOps frameworks, best practices, and reusable components.

Required Skills & Qualifications: 7+ years of experience in Data/AI Engineering, with 3-5 years in MLOps. Strong programming skills in Python, PySpark, SQL. Expertise in ML frameworks: scikit-learn, TensorFlow, PyTorch. Experience with model serialization formats (pickle, ONNX, TorchScript). Hands-on with CI/CD tools: GitHub Actions, MLflow CI/CD, Kubeflow Pipelines, Azure ML Pipelines. Experience deploying ML models on Azure ML, AWS SageMaker, and Vertex AI. Proficiency in Docker and containerized deployments.

Preferred Skills: Familiarity with Kubernetes for scaling ML workloads. Experience with feature stores and monitoring tools (Feast, WhyLabs, Evidently AI, Prometheus, Grafana). Knowledge of data governance and compliance (GDPR, HIPAA, etc.). Exposure to large-scale distributed systems and real-time inference.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process.
DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
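As referenced in the posting above, a minimal MLflow tracking sketch of the kind of workflow described might look as follows; the experiment name, model, and metric are hypothetical illustrations, not DXC's actual pipeline.

```python
# Minimal illustrative MLflow sketch: train a scikit-learn model, log params,
# metrics, and the serialized model. Names and data are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("example-credit-scoring")   # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # could later be registered and served
```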
Posted 1 week ago
3.0 - 7.0 years
4 - 5 Lacs
Mumbai, Navi Mumbai
Work from Office
About Atos: Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large, to live, work and develop sustainably in a safe and secure information space.

Required Skills and Competencies - MS Fabric: Design and implement data solutions using Microsoft Fabric components (Power BI, Synapse, Data Engineering, Data Factory, Data Science, and Data Activator). Develop robust ETL/ELT pipelines leveraging Data Factory and Synapse Pipelines. Build and manage lakehouses, data warehouses, and datasets in OneLake. Create insightful and interactive Power BI reports and dashboards to meet business needs. Collaborate with data scientists, analysts, and business stakeholders to deliver integrated data solutions. Optimize performance of data pipelines and reporting solutions. Implement best practices for data governance, security, and compliance in the Microsoft Fabric ecosystem. Stay updated on the latest Fabric features and suggest enhancements or adoptions where applicable.

Details: 10+ years of experience in Azure Data Services, Data Architecture, and Cloud Infrastructure. 3+ years of experience in MS Fabric, designing and implementing data solutions using Microsoft Fabric components (Power BI, Synapse, Data Engineering, Data Factory, Data Science, and Data Activator). Expertise in Microsoft Purview, Data Governance, and Security frameworks. Experience in performance tuning on Microsoft Fabric & Azure. Proficiency in SQL, Python, PySpark, and Power BI for data engineering and analytics. Experience in DevOps for Data (CI/CD, Terraform, Bicep, GitHub Actions, ARM Templates). Strong problem-solving and troubleshooting skills in Azure/Fabric & Data Services. Data Flows: Design data flows within the Microsoft Fabric environment. Storage Strategies: Implement OneLake storage strategies. Analytics Configuration: Configure Synapse Analytics workspaces. Migration: Experience with migration from existing data platforms (e.g., Databricks/Spark) to Microsoft Fabric. Integration Patterns: Establish Power BI integration patterns. Data Integration: Architect data integration patterns between systems using Azure Databricks/Spark and Microsoft Fabric. Delta Lake Architecture: Design Delta Lake architecture and implement medallion architecture (Bronze/Silver/Gold layers). Real-Time Data Ingestion: Create real-time data ingestion patterns and establish data quality frameworks. Data Governance: Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance. Security: Implement row-level security, data masking, and audit logging mechanisms. Pipeline Development: Design and implement scalable data pipelines using Azure Databricks/Spark for ETL/ELT processes and real-time data integration.
Performance Optimization: Implement performance tuning strategies for large-scale data processing and analytics workloads. Analytical Skills: Strong analytical and problem-solving skills. Communication: Excellent communication and teamwork skills. Certifications: Relevant certifications in Microsoft data platforms are a plus.

Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, with integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more on our CSR commitment. Choose your future. Choose Atos.
Posted 1 week ago
3.0 - 7.0 years
15 - 20 Lacs
Mumbai
Work from Office
Job Title: AI Architect. Location: Hyderabad/Mumbai/Bengaluru. Experience Level: 15+. Education: Any Degree.

Job Summary: Scope out, design and oversee the architecture of AI and machine learning solutions across products and platforms for our clients. Collaborate with team members to translate business requirements into AI-enabled solutions, whether as a Roadmap, Proof of Value, Full Implementation or Managed Service support capability and service offering. Aid in the selection of appropriate tools, frameworks, and technologies for AI development in the Google Cloud (GCP) using related technologies. Create reusable and scalable AI models and pipelines for our clients, thereby ensuring performance, reliability, and security. Develop and enforce best practices for model governance, MLOps, and responsible AI. Evaluate and integrate third-party AI services and APIs when needed. Drive innovation by staying current with AI trends and emerging technologies (e.g., LLMs, edge AI).

Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. 7+ years of experience in software development, with at least 3 years in AI/ML architecture roles. Strong proficiency in machine learning, deep learning, data engineering, and distributed systems. Experience with Google cloud-based AI platforms such as Google Vertex AI. Proficiency in Python and experience with ML libraries. Solid understanding of AI data governance, model explainability, and ethical AI principles. Excellent communication skills to work with both technical and non-technical stakeholders. Demonstrated experience delivering scalable AI projects that drive business value.

Preferred Skills: Experience deploying generative AI models and LLMs leveraging RAG (e.g., GPT, Google, Claude, open-source models). Familiarity with containerization (Docker, Kubernetes) and CI/CD for ML. Knowledge of NLP, computer vision, or reinforcement learning. Hands-on development and support of AI solutions. Solid client communication skills.
Posted 1 week ago
7.0 - 9.0 years
30 - 35 Lacs
Chennai
Work from Office
Job Description: We are looking for a highly skilled Lead Data Analyst with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python.

Key Responsibilities: Design, develop, and maintain scalable data warehouse solutions. Write and optimize complex SQL queries for data extraction, transformation, and reporting. Develop and automate data pipelines using Python. Work with AWS cloud services for data storage, processing, and analytics. Collaborate with cross-functional teams to provide data-driven insights and solutions. Ensure data integrity, security, and performance optimization.

Qualifications: 7-9 years of experience in Data Warehousing & Analytics. Strong proficiency in writing complex SQL queries with a deep understanding of query optimization, stored procedures, and indexing. Hands-on expe
Posted 1 week ago
5.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Job Description

Overview: The media industry is at a pivotal moment where generative AI is transforming how content is created and experienced. We are seeking a dynamic, forward-thinking Director of Product Management to lead our AI-driven product initiatives and shape the future of data products at Nielsen. In this high-impact role, you will own all aspects of the product lifecycle, from long-term vision and strategy through development, go-to-market execution, and continuous innovation, championing AI-first capabilities that give our company a cutting-edge advantage. You will champion a multi-year strategy and product roadmap for the core data platform, focusing on its evolution to a modern data lakehouse architecture. Starting as an individual contributor, you will lay the groundwork for our Generative AI product function and act as an internal evangelist for AI innovation. As the team grows, you will mentor and coach other product managers, scaling your impact and fostering a culture of creativity and collaboration.

Key Responsibilities: Define the product roadmap for AI features and ensure it stays ahead of industry trends. Define the vision for how our data assets will be organized, governed, and accessed to fuel generative AI development at scale. Define the requirements for an AI-ready data system, including data quality, lineage, and bias mitigation, to ensure our models are trained on trustworthy and representative data. Develop and drive a bold product vision and multi-year strategy for generative AI in media. Identify new AI-driven product opportunities and align them with our business goals and the evolving needs of content consumers and creators. Lead the entire product development lifecycle, from ideation and prototyping to launch and iteration. Work closely with engineering and design teams to build AI-powered media products that are innovative, user-centric, and high-quality. Ensure on-time delivery of new features and refine the product based on user feedback and data insights. Partner with cross-functional teams, including Marketing and Sales, to craft go-to-market (GTM) strategies for AI features and products. Define value propositions and positioning for our generative AI capabilities, and support sales enablement. Champion an AI-first mindset across the organization. Act as the internal evangelist for generative AI capabilities, educating and inspiring teams to rethink their product from an AI-first mentality. Collaborate closely with data engineering and data science teams to architect and build the data lakehouse. You will serve as the product owner for key data platform initiatives, ensuring the infrastructure meets the specific needs of AI model training, fine-tuning, and real-time inference. Stay informed on the latest developments in AI and the media industry. Monitor emerging generative AI technologies (such as new LLMs, image/video generation models, etc.) and analyze competitor products and market trends. You will anticipate how AI advancements and media consumption trends shape user expectations and proactively adapt our strategy. Operate as an individual contributor, demonstrating leadership in mentoring colleagues on the strategic importance of a well-architected data foundation. You will guide product and engineering teams on best practices for leveraging the data lakehouse to accelerate AI product development.

Qualifications: 10-12+ years of product management experience, including ownership of product strategy and launches.
7+ years experience in product management or a similar discipline in a SaaS or DaaS environment Pr
Posted 1 week ago
2.0 - 7.0 years
30 - 35 Lacs
Gurugram
Work from Office
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join #TeamAmex and let's lead the way together.

How we serve our customers is constantly evolving and is a challenge we gladly accept. Whether you're finding new ways to prevent identity fraud or enabling customers to start a new business, you can work with one of the most valuable data sets in the world to identify insights and actions that can have a meaningful impact on our customers and our business. And, with opportunities to learn from leaders who have defined the course of our industry, you can grow your career and define your own path. Find your place in risk and analytics on #TeamAmex.

The Platforms and Capabilities team within Global Risk and Compliance (GRC) is responsible for building and implementing leading-edge platforms and solutions for risk management. Our vision is to provide best-in-class Platforms and Capabilities that enable the risk management framework in GRC and across the Company and empower colleagues to excel at risk management activities. We are seeking a Manager, Digital Product Manager to lead the design and delivery of enterprise reporting solutions across the Integrated Risk Management platform at American Express. This role focuses on enabling enterprise-wide regulatory and risk reporting through scalable, well-governed data solutions. You will own the roadmap for a set of reporting and insight-generation capabilities that support compliance with regulatory expectations (e.g., BCBS 239), internal governance processes, and operational decision-making. You will collaborate across business, data governance, engineering, and platform teams to define data products and controls, ensure alignment to enterprise standards, and embed best practices across development cycles. You will lead cross-functional efforts to implement metadata management, issue remediation workflows, and access governance capabilities. This role offers the opportunity to shape foundational elements of how risk data is managed and consumed across American Express.

Key Responsibilities: Define and drive delivery of platform features that enhance risk data governance and lineage tracking. Collaborate with federated data owners, stewards, and engineers to standardize metadata and access controls. Develop product strategy for data traceability, issue tracking, and validation workflows. Ensure alignment of IRM platform capabilities with enterprise data governance frameworks. Drive coordination across Agile teams and business users for implementation and adoption of governance tools. Define data-related KPIs, SLAs, and performance measures to track product impact. Conduct stakeholder sessions to gather requirements and communicate platform vision. Partner with audit and regulatory exam teams to ensure data controls are compliant and auditable.

Minimum Qualifications: Bachelor's or master's degree in Business Analytics, Information Systems, Finance, or a related discipline. 4+ years of digital product experience, including at least 2 years in data-focused roles.
A deep understanding of regulatory and internal risk reporting requirements. Experience working with BI/reporting tools like Power BI, Tableau, or similar platforms. Ability to document requirements and manage product delivery in Agile environments. Experience partnering with engineering, data governance, and business leadership teams. Familiarity with data validation, lineage, and reconciliation best practices. Effective communicator, able to lead conversations with senior-level stakeholders. Preferred Qualifications: Background in regulatory reporting (e.g., Basel, CCAR, SOX, or FDIC compliance). Experience implementing ServiceNow Performance Analytics or reporting modules. Familiarity with financial risk data aggregation standards (e.g., BCBS 239). Agile certification (Scrum Master, CSPO) or data product specialization (CDMP, DGSP). American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
8.0 - 13.0 years
35 - 40 Lacs
Pune
Work from Office
Corporate Title: Vice President. As one of the world's leading asset management firms, data is at the heart of our operations. We're looking for a Data Governance Technology Lead to spearhead the adoption and operation of modern data governance tooling across the entire organization. The role is part of DWS's Data Platform Engineering organization. Data Platform Engineering builds and operates our critical enterprise data ecosystem to ensure high-quality, secure and compliant data flows across the organization. This is a highly technical leadership position that combines strategy, hands-on engineering, and team management. You will collaborate closely with the Head of Data Platforms as well as Data Architecture and the Chief Data Office.

Your key responsibilities: Support the continuous refinement of DWS's data governance vision and help define the technical roadmap and backlog to bring this vision to reality. Lead the engineering team responsible for designing, developing and operating DWS's data governance and data observability stack. Support your team in meeting delivery timelines and adhering to quality standards. Drive the end-to-end implementation of Collibra, including workflow automation, metadata management, lineage capture and more. Work closely with the data stewards, data owners, architecture, and CDO teams to ensure a consistent approach to data governance and quality management. Help define and monitor KPIs to ensure enterprise-wide adoption of DG tooling and core data standards.

Your skills and experience: 7+ years of experience with data governance tooling and related concepts such as metadata management, data lineage, and DQ monitoring. Strong software engineering background, with experience in Python, Java, or similar programming languages. Deep understanding of SDLC concepts, automation and efficient tech delivery. Good understanding of fundamental data architecture concepts. Hands-on experience with the Collibra Suite (certifications would be a plus). Proficiency with modern data platforms such as Snowflake or GCP BigQuery. Strong leadership experience, including managing technical teams and cross-functional projects.
Posted 1 week ago
5.0 - 9.0 years
12 - 16 Lacs
Noida
Work from Office
Key Responsibilities: Design, build, and maintain a robust data platform on Databricks. Develop efficient ETL/ELT pipelines to ingest data from SQL Server, MongoDB, and InfluxDB into Databricks Delta Lake. Implement real-time and batch data ingestion strategies using tools such as Kafka, Azure Event Hubs, or similar. Optimize data storage and processing for performance, scalability, and cost-effectiveness. Build data models to support BI, advanced analytics, and machine learning use cases. Collaborate with stakeholders (Data Scientists, Analysts, Product Teams) to define data requirements. Ensure data quality, governance, and security across the platform. Monitor, troubleshoot, and enhance data workflows for reliability.

Required Skills & Qualifications: Proven experience in Databricks (Delta Lake, Spark, PySpark, SQL). Strong expertise in data integration from multiple sources (SQL Server, MongoDB, InfluxDB). Hands-on experience with ETL/ELT pipeline development and orchestration (e.g., Airflow, ADF, or equivalent). Proficiency in data modeling, data warehousing concepts, and performance tuning. Familiarity with real-time data streaming (Kafka, Azure Event Hubs, or similar). Strong programming skills in Python and SQL. Experience with cloud platforms (Azure). Excellent problem-solving skills and the ability to work in a fast-paced environment. Familiarity with Change Data Capture.

Preferred Qualifications: Experience with InfluxDB or time-series data handling. Exposure to machine learning workflows within Databricks. Knowledge of data governance frameworks (Unity Catalog, Purview, or similar).
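The real-time ingestion responsibility above (Kafka or Event Hubs into Delta Lake) can be illustrated with a brief Structured Streaming sketch; the broker address, topic, and storage paths are hypothetical assumptions.

```python
# Minimal illustrative Structured Streaming sketch: read a Kafka topic and
# append the raw events to a Delta table. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-ingest-example").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker address
          .option("subscribe", "sensor-readings")             # assumed topic
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  F.col("timestamp")))

query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/sensor-readings")
         .outputMode("append")
         .start("/mnt/bronze/sensor_readings"))

query.awaitTermination()
```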
Posted 1 week ago
7.0 - 12.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Role Overview: We are seeking a hands-on AI/ML Lead who will design, develop, and deploy AI-driven lending automation solutions with a focus on NLP workflows, GenAI orchestration, and valuation process automation. This is a 50% coding and delivery role, 50% architecture and leadership role. You will work directly with our Engineering Lead, Data Engineering Team, Product Managers, and UK-based stakeholders to define, architect, and integrate AI capabilities seamlessly into our Node.js + React enterprise platform.

Key Responsibilities:

AI Solution Architecture & Development: Architect and implement end-to-end AI/ML features aligned with lending workflows: 1. NLP Document Parsing - OCR, entity extraction, semantic search, summarization, classification. 2. Valuation Automation - predictive modeling for asset and loan valuations. 3. GenAI Orchestration - multi-step, multi-service automation workflows. Develop production-grade AI services in Python, integrating with REST/GraphQL APIs, microservices, and event-driven architectures. Integrate GenAI APIs (OpenAI GPT, Anthropic Claude, AWS Comprehend, Google Vertex AI, etc.) into existing systems.

Model Lifecycle Management: Select, train, fine-tune, and deploy models (LLMs, transformer-based models, classical ML). Implement model monitoring pipelines for accuracy, drift detection, bias evaluation, and retraining triggers. Optimize inference latency, throughput, and scalability for production workloads.

Data Readiness & Governance: Collaborate with Data Engineering to ensure AI-ready data pipelines (schema design, storage format, vectorization strategies). Establish data labeling, augmentation, and versioning processes for supervised and semi-supervised learning. Ensure compliance with data privacy regulations (GDPR, RBI guidelines) and ethical AI principles.

AI Workflow Orchestration: Design and implement multi-step AI orchestration layers combining LLM prompts, RAG (Retrieval-Augmented Generation), and business rules. Build custom prompt chains and tools to handle complex workflows like Credit Committee Pack creation and communication parsing.

Stakeholder Collaboration & Leadership: Translate complex AI concepts into clear business benefits for non-technical stakeholders. Mentor and guide developers on AI integration best practices. Track AI feature KPIs and demonstrate measurable business impact.

Required Skills & Experience: Total Experience: 7-12 years in software engineering, with 3+ years of hands-on AI/ML solution delivery. Proven record of deploying AI/NLP features in production environments. Proficiency in Python (FastAPI, Flask, LangChain, Hugging Face Transformers, PyTorch/TensorFlow). Strong experience with NLP pipelines: tokenization, embeddings, semantic search, summarization, classification, sentiment analysis. Expertise in AI orchestration frameworks (LangChain, Haystack, LlamaIndex) and workflow automation. Proficient in REST API design, microservices, and integrating AI with JavaScript/TypeScript-based backends. Deep understanding of data structures, feature engineering, and vector databases such as Pinecone, Weaviate, and FAISS (a minimal semantic-search sketch follows this posting). Solid grasp of MLOps tools (MLflow, Kubeflow, AWS SageMaker, Azure ML). Familiarity with cloud-native AI deployments (AWS, Azure, GCP). Strong communication skills for technical-to-business translation.

Bonus Skills: Fintech or lending automation platform experience. Familiarity with financial document workflows (KYC, underwriting, valuation reports).
- Hands-on experience with RAG pipelines, prompt engineering, and custom LLM training.
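As referenced above, the semantic-search and vector-database work can be illustrated with a minimal embedding-plus-FAISS sketch; the model name and documents are hypothetical examples, not assets from this role.

```python
# Minimal illustrative semantic-search sketch: embed short documents with a
# sentence-transformer model and query them via a FAISS index.
# Model name and documents are hypothetical examples.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Valuation report for commercial property, Q2 figures attached.",
    "KYC documents received for the new borrower.",
    "Credit committee pack draft for the bridging loan application.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")       # widely used small embedding model
embeddings = model.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])        # exact L2 search over the vectors
index.add(embeddings)

query = model.encode(["Which documents relate to loan valuation?"],
                     convert_to_numpy=True).astype("float32")
distances, ids = index.search(query, k=2)
print([docs[i] for i in ids[0]])                      # the two closest documents
```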
Posted 1 week ago
5.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
YOUR IMPACT: As a Sr. Data Specialist, part of the Master Data Services (MDS) team within IT Shared Services, you will help lead both the operational and strategic aspects of master data management. You will be part of a global team that ensures key master data within OpenText is maintained and governed, including Customer, Vendor, Material and financial data. With your expertise and passion for master data governance, you'll help lead quality improvements and policies to ensure the protection of sensitive data and information assets. Your role will focus on ensuring adherence to an enterprise-wide master data governance framework, covering master data policies, standards, and practices across both business and functional areas. This will be critical in achieving the necessary consistency, quality, and protection of master data assets to align with our business goals.

WHAT THE ROLE OFFERS: Following a master data governance framework focusing on data quality and sensitive data protection. Responsible for creating and maintaining master data such as Customer, Vendor and Materials, and Finance data such as GLs and Cost Centres, in our ERP, SAP. Participate in meetings with the Master Data Services Working Group in functional and technology areas to align master data protection requirements with operational planning. Provide insight into critical business decisions around master data, working closely with senior teams to answer and solve pressing issues. Assist in the creation of data quality and protection standards across the organization and lead their adoption. Assist in defining performance indicators and metrics, ensuring compliance with master data governance policies and roles. Gathering, structuring, and analysing data; providing recommendations by presenting to management, process owners and business partners. Participate in large-scale projects, ensuring data governance is adhered to during the project. Responsible for initiatives to cleanse existing master data sets to drive improvements in overall accuracy, completeness, and quality. Participate in divestitures, mergers and acquisitions for migrated master data (Customer, Vendor, Materials and Finance). Daily operations of workflows and/or tickets on complex or urgent master data requests. Global month-end and quarter-end support, which involves shift work.

WHAT YOU NEED TO SUCCEED: At least 5+ years of experience in a major technology organization, with expertise in large-scale data management, operations, and governance. Good knowledge of industry-leading data quality and data protection management practices. Experience with master data governance practices and understanding of business and technology issues related to managing enterprise information assets and data protection. Proven consulting skills, including change management strategies, communication, culture change, and performance measurement system design. 3+ years of experience working with SAP or similar ERP systems for managing master data. 3+ years of experience working with a ticketing system and escalations. 3+ years of experience in development of standard data analyses, reports, and dashboards. Bachelor's or master's degree in Computer Science, MIS, or Information Management. Experience using Excel to perform data operations, e.g. working with data types, v-lookups, pivot tables, etc. Experience with Data Quality reporting, Business Intelligence, and AI would be beneficial.
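The Excel-style data operations mentioned above (lookups, pivot tables, duplicate checks) have direct equivalents in Python; below is a minimal illustrative pandas sketch with hypothetical master-data files and columns.

```python
# Minimal illustrative pandas sketch of common master-data operations:
# a v-lookup-style join, a duplicate check, and a pivot summary.
# File names and columns are hypothetical.
import pandas as pd

vendors = pd.read_csv("vendor_master.csv")        # e.g. vendor_id, name, country
invoices = pd.read_csv("open_invoices.csv")       # e.g. invoice_id, vendor_id, amount

# v-lookup equivalent: enrich invoices with vendor attributes
enriched = invoices.merge(vendors, on="vendor_id", how="left")

# flag potential duplicate vendor records on name + country
dupes = vendors[vendors.duplicated(subset=["name", "country"], keep=False)]

# pivot-table equivalent: total invoice amount per vendor country
summary = enriched.pivot_table(index="country", values="amount", aggfunc="sum")

print(len(dupes), "possible duplicate vendor rows")
print(summary)
```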
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role. Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Stibo Product Master Data Management. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education.

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business objectives and user needs, while maintaining a focus on quality and efficiency throughout the project lifecycle.

Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor project progress and ensure timely delivery of milestones.

Professional & Technical Skills: Must Have Skills: Proficiency in Stibo Product Master Data Management. Strong understanding of data governance principles and practices. Experience with application design and architecture. Ability to analyze and optimize application performance. Familiarity with integration techniques and tools.

Additional Information: The candidate should have a minimum of 7.5 years of experience in Stibo Product Master Data Management. This position is based at our Bengaluru office. A 15 years full time education is required.
Posted 1 week ago
7.0 - 12.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Overview: Apply best practices for data quality and triaging (with the goal of reducing data downtime) on existing and new pipelines for our business, customers and Data Science team.

Responsibilities: Own data quality from existing data pipelines, developing an end-to-end quality framework spanning outages, freshness, accuracy and issue reporting. Define best practices for quality development, engineering, and coding as part of a world-class engineering team in PepsiCo. Work with the product team leveraging our core technology platforms, which include Direct Commerce, Supply Chain, Marketing Automation, Mobile, and Data. Collaborate in architecture discussions and architectural decision-making that is part of continually improving and expanding these platforms. Lead framework development in collaboration with other engineers; validate requirements / stories, assess current capabilities, and decompose manual quality-related tasks into engineering tasks. Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with QA engineers. Develop software and partner with software teams like MonteCarlo / Great Expectations / Sentry in short iterations to quickly add business value. Introduce new tools / practices to improve data and code quality; this includes researching / sourcing 3rd-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers and operations team members.

Qualifications: 7+ years of experience in designing and building systems for collecting, storing, and analysing data at scale. Required skills: Python, SQL, Airflow, AWS, Azure. Prior experience with data quality frameworks and tools is a plus. Strong interpersonal skills to effectively interact with client/onshore personnel to understand specific Enterprise Information Management requirements and develop solutions based on those requirements. Experience in Retail, Digital Commerce or Financial Services domain is preferred. Master's or Bachelor's degree in computer engineering or a related analytics field from a premier/Tier 1 institute of India. Very strong analytical skills with the demonstrated ability to research and make decisions based on day-to-day and complex customer problems. Hands-on experience handling large data sets and use of Python/SQL for data analysis. Strong record of achievement, solid analytical ability, and an entrepreneurial hands-on approach to work. Outstanding written and verbal communication skills. Deep understanding of database design and engineering.
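The quality checks described above (freshness, outages, accuracy) can be sketched in a tool-agnostic way; the illustrative Python checks below assume a hypothetical pipeline output file, columns, and thresholds, and are not tied to Monte Carlo or Great Expectations.

```python
# Minimal illustrative data-quality checks for a pipeline output:
# freshness, completeness, and an accuracy-style range check.
# The file, columns, and thresholds are hypothetical.
import pandas as pd

df = pd.read_parquet("daily_sales_output.parquet")   # hypothetical pipeline output

failures = []

# Freshness: latest load should be no older than one day
if (pd.Timestamp.now() - pd.to_datetime(df["load_date"]).max()).days > 1:
    failures.append("stale data: latest load_date is more than 1 day old")

# Completeness: key columns must not contain nulls
for col in ["store_id", "net_sales"]:
    if df[col].isna().any():
        failures.append(f"null values found in required column '{col}'")

# Accuracy-style check: sales amounts should fall in a plausible range
if not df["net_sales"].between(0, 1_000_000).all():
    failures.append("net_sales outside expected range [0, 1,000,000]")

# In a real pipeline these failures would alert the on-call or block downstream loads
print("PASSED" if not failures else f"FAILED: {failures}")
```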
Posted 1 week ago
5.0 - 10.0 years
16 - 20 Lacs
Pune
Work from Office
As a Senior Data Architect, you will be instrumental in shaping the bank's enterprise data landscape, supporting teams in designing, evolving, and implementing data architectures that align with the enterprise target state and enable scalable, compliant, and interoperable solutions. You will also serve as the go-to expert and trusted advisor on what good looks like in data architecture, helping to set high standards and drive continuous improvement across the organization. This role is ideal for an experienced data professional with deep technical expertise, strong solution architecture skills, and a proven ability to influence design decisions across both business and technology teams.

Responsibilities

1. Enterprise Data Architecture & Solution Design: Support teams in designing, evolving, and implementing data architectures that align with the enterprise target state and enable scalable, compliant, and interoperable solutions. Serve as the go-to person for data architecture best practices and standards, helping to define and communicate what good looks like to ensure consistency and quality. Lead and contribute to solution architecture for key programs, ensuring architectural decisions are well-documented, justified, and aligned to enterprise principles. Work with engineering and platform teams to design end-to-end data flows, integration patterns, data processing pipelines, and storage strategies across structured and unstructured data. Drive the application of modern data architecture principles including event-driven architecture, data mesh, streaming, and decoupled data services.

2. Data Modelling and Semantics: Provide hands-on leadership in data modelling efforts, including the occasional creation and stewardship of conceptual, logical, and physical models that support enterprise data domains. Partner with product and engineering teams to ensure data models are fit-for-purpose, extensible, and aligned with enterprise vocabularies and semantics. Support modelling use cases across regulatory, operational, and analytical data assets.

3. Architecture Standards & Frameworks: Define and continuously improve data architecture standards, patterns, and reference architectures that support consistency and interoperability across platforms. Embed standards into engineering workflows and tooling to encourage automation and reduce delivery friction. Measure and report on adoption of architectural principles using architecture KPIs and compliance metrics.

4. Leadership, Collaboration & Strategy: Act as a technical advisor and architectural leader across initiatives, mentoring junior architects and supporting federated architecture teams in delivery. Build strong partnerships with senior stakeholders across the business, CDIO, engineering, and infrastructure teams to ensure alignment and adoption of the architecture strategy. Stay current with industry trends, regulatory changes, and emerging technologies, advising on their potential impact and application.

Skills: Extensive experience in data architecture, data engineering, or enterprise architecture, preferably within a global financial institution. Deep understanding of data platforms, integration technologies, and architectural patterns for real-time and batch processing. Proficiency with data architecture tools such as Sparx Enterprise Architect, ERwin, or similar. Experience designing solutions in cloud and hybrid environments (e.g. GCP, AWS, or Azure), with knowledge of associated data services.
Hands-on experience with data modelling, semantic layer design, and metadata-driven architecture approaches. Strong grasp of data governance, privacy, security, and regulatory compliance, especially as they intersect with architectural decision-making. Strategic mindset, with the ability to connect architectural goals to business value and communicate effectively with technical and non-technical stakeholders. Experience working across business domains including Risk, Finance, Treasury, or Front Office functions.

Well-being & Benefits

Emotionally and mentally balanced: we support you in dealing with life crises, maintaining stability through illness, and maintaining good mental health. Empowering managers who value your ideas and decisions. Show your positive attitude, determination, and open-mindedness. A professional, passionate, and fun workplace with flexible Work from Home options. A modern office with fun and relaxing areas to boost creativity. Continuous learning culture with coaching and support from team experts.

Physically thriving: we support you in managing your physical health by taking appropriate preventive measures and providing a workplace that helps you thrive. Private healthcare and life insurance with premium benefits for you and discounts for your loved ones.

Socially connected: we strongly believe in collaboration, inclusion and feeling connected to open up new perspectives and strengthen our self-confidence and wellbeing. Kids@TheOffice - support for unexpected events requiring you to care for your kids during work hours. Enjoy retailer discounts, cultural and CSR activities, employee sport clubs, workshops, and more.

Financially secure: we support you to meet personal financial goals during your active career and for the future. Competitive income, performance-based promotions, and a sense of purpose. 24 days holiday, loyalty days, and bank holidays (including weekdays for weekend bank holidays).
Posted 1 week ago
2.0 - 6.0 years
10 - 15 Lacs
bengaluru
Work from Office
Role Description
The Compliance and Anti-Financial Crime (CAFC) Data Office is responsible for Data Governance and Management across key functions including AFC, Compliance, and Legal. The team supports these functions in establishing and improving data governance to achieve critical business outcomes such as effective control operation, regulatory compliance, and operational efficiency. The CAFC Data Governance and Management team implements Deutsche Bank's Enterprise Data Management Framework, focusing on controls, culture, and capabilities, to drive improved data quality, reduce audit and regulatory findings, and strengthen controls. As a member of the Divisional Data Office, the role holder will support both Run-the-Bank and Change-the-Bank initiatives, with a particular focus on Financial Crime Risk Assessment (FCRA) data collation, processing, testing, and automation.
Your key responsibilities
Document and maintain existing and new processes; respond to internal and external audit queries and communicate updates clearly to both technical and non-technical audiences.
Independently manage the FCRA data collection process, including data collection template generation, quality checks, and stakeholder escalation.
Execute data cleansing and transformation tasks to prepare data for analysis.
Perform variance analysis and develop a deep understanding of the underlying data sources used in Financial Crime Risk Assessment.
Document data quality findings and recommendations for improvement, feeding into technology requirements.
Work with Data Architecture and developers to design and build FCRA risk data metrics.
Investigate and analyze data issues related to quality, lineage, controls, and authoritative source identification.
Ensure new data sources align with Deutsche Bank's Data Governance standards: maintain metadata in Collibra, visualize data lineage in Solidatus, and ensure certification and control coverage.
Automate manual data processes using tools such as Python, SQL, Power Query, and MS Excel to improve efficiency and reduce operational risk (a minimal sketch follows below).
Translate complex technical issues into simple, actionable insights for business stakeholders, demonstrating strong communication and stakeholder management skills.
Your skills and experience
6+ years of experience in data management within financial services, with a strong understanding of data risks and controls.
Familiarity with industry-standard frameworks such as DCAM or DAMA (certification preferred).
Hands-on experience with data cataloguing using Collibra, data lineage documentation using Solidatus, and data control assessment and monitoring.
Proficiency in Python, SQL, and Power Query/Excel for data analysis and automation.
Strong communication skills with the ability to explain technical concepts to non-technical stakeholders.
Proven ability to work independently and collaboratively across global teams.
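To illustrate the automation item above, the following is a minimal sketch of a Python (pandas) data quality check over an FCRA-style extract. The file name, column names, and checks are hypothetical placeholders that only indicate the general approach, not a prescribed implementation.

# Minimal sketch of a pandas-based data quality check of the kind this role
# automates; file name, column names, and checks are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = ["business_line", "country", "client_count", "reporting_period"]

def run_quality_checks(path: str) -> pd.DataFrame:
    """Load a risk-assessment extract and report basic quality findings."""
    df = pd.read_csv(path)
    findings = []

    # Completeness: required columns must exist and contain no nulls.
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            findings.append({"check": "missing_column", "detail": col})
        elif df[col].isna().any():
            findings.append({"check": "null_values", "detail": col,
                             "count": int(df[col].isna().sum())})

    # Validity: counts should never be negative.
    if "client_count" in df.columns:
        negatives = df[df["client_count"] < 0]
        if not negatives.empty:
            findings.append({"check": "negative_values", "detail": "client_count",
                             "count": len(negatives)})

    # Uniqueness: one row per business line, country, and reporting period.
    key = [c for c in ["business_line", "country", "reporting_period"] if c in df.columns]
    if key and df.duplicated(subset=key).any():
        findings.append({"check": "duplicate_keys", "detail": ",".join(key),
                         "count": int(df.duplicated(subset=key).sum())})

    return pd.DataFrame(findings)

if __name__ == "__main__":
    report = run_quality_checks("fcra_extract.csv")
    print(report if not report.empty else "No findings")

In practice the findings table would feed the documentation and escalation steps described above rather than being printed to the console.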
Posted 1 week ago
5.0 - 10.0 years
8 - 13 Lacs
hyderabad
Work from Office
Position - Enterprise Finance Data Steward Manager (Mosaic) - SAP CO: COPA, Material Ledger, Product Costing
Position Overview: The domain Data Steward role is responsible for working within the global data governance team and with their local businesses to maintain alignment with the Enterprise Data Governance (EDG) processes, rules, and standards set to ensure data is fit for purpose. This will be achieved through the EDG Data Steward operating as the single point of contact for those creating and consuming data within their respective data domain(s). Additionally, they will drive the team to interact directly with key domain and project stakeholders, the EDG Lead, the Governance Council, other data stewards across the organization, and relevant SMEs throughout the organization as necessary. This position collaborates with and advises PepsiCo's Governance Council, which is accountable for the success of PepsiCo's EDG program.
Responsibilities
Primary Accountabilities:
Partner closely with the PepsiCo Financial Planning & Analysis (FP&A) team to ensure data requirements are met to enable timely, accurate, and insightful reporting and analysis in support of FP&A digitization initiatives.
Promote data accuracy and adherence to PepsiCo-defined global governance practices, and drive acceptance of PepsiCo's enterprise data standards and policies across the various business segments.
Maintain and advise relevant stakeholders on data governance-related matters in the relevant data domains, with a focus on the business use of the data.
Monitor operational incidents, support root cause analysis, and, based on recurrence, propose ways to optimize the Data Governance framework (processes, Data Quality Rules, etc.).
Provide recommendations and supporting documentation for new or proposed data standards, business rules, and policy (in conjunction with the Governance Council as appropriate).
Advise on various projects and initiatives to ensure that any data-related changes and dependencies are identified, communicated, and managed to ensure adherence to the established Enterprise Data Governance standards.
Represent market-specific needs in Sector data councils and above, ensuring local user needs are heard, met, and addressed; voice opinions on why proposals will or will not work for the market you represent and provide alternative solutions.
Coordinate across the Sector (with fellow Market Data Stewards and the EDG Steward; strategic initiatives, Digital Use Cases, and the federated data network) in order to maintain consistency of PepsiCo's critical enterprise, digital, operational, and analytical data.
Accountable for ensuring that data-centric activities are aligned with the EDG program and leverage applicable data standards, governance processes, and overall best practices.
Data Governance Business Standards:
Ensures alignment of the data governance processes and standards with applicable enterprise, business segment, and local data support models.
Champions the single set of enterprise-level data standards and the repository of key elements pertaining to the finance domain, and promotes their use throughout the PepsiCo organization.
Owns one or multiple domain perspectives in defining and continually evolving the roadmap for enterprise data governance based upon strategic business objectives, existing capabilities/programs, cultural considerations, and a general understanding of emerging technologies and governance models/techniques.
Advises on various projects and initiatives to ensure that any data-related changes and dependencies are identified, communicated, and managed to ensure adherence to the established Enterprise Data Governance standards.
Data Domain Coordination and Collaboration:
Responsible for helping identify the need for sector-level data standards (and above) based on strategic business objectives and the evolution of enterprise-level capabilities and analytical requirements.
Collaborates across the organization to ensure consistent and effective execution of data governance and management principles across PepsiCo's enterprise and analytical systems and data domains.
Accountable for driving organizational acceptance of EDG-established data standards, policies, definitions, and process standards for critical/related enterprise data.
Promotes and champions PepsiCo's Enterprise Data Governance capability and data management program across the organization.
Qualifications
5+ years of experience working in Data Governance or Data Management within a global CPG (Consumer Packaged Goods) company; a strong data management background, understanding data, how to ingest data, proper data use/consumption, data quality, and stewardship.
7+ years of experience working with data across multiple domains (with a particular focus on Finance data), associated processes, involved systems, and data usage.
Minimum of 5 years of functional experience working with and designing standards for data cataloging processes and tools.
Ability to partner with both business and technical subject matter experts to ensure standardization of operational information and ensure enterprise-wide data governance policies and procedures are defined and implemented.
Matrix management skills and business acumen.
Competencies:
Strong knowledge and understanding of master data elements and processes related to data across multiple domains.
Understanding of operational usage of transactional data as it relates to financial planning.
Strong communication skills; able to persuade and influence others at all organization levels, with the ability to foster lasting partnerships.
Ability to translate business requirements into critical data dependencies and requirements.
Ability to think beyond the current state (processes, roles, and tools) and work towards an unconstrained, optimized design, with an ability to solicit followership from the functional teams to think beyond the way things work today.
Able to align various stakeholders to a common set of standards and sell the benefits of the EDG program.
Ability to foster lasting relationships across varying organizational levels and business segments, with the maturity to interface with all levels of management.
Self-starter who welcomes responsibility, along with the ability to thrive in an evolving organization and bring structure to unstructured situations.
Ability to arbitrate difficult decisions and drive consensus through a diplomatic approach.
Excellent written and verbal communication skills.
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
bengaluru
Work from Office
Position Description: AWS Data Engineer
Job Description: We are seeking a skilled and motivated AWS Data Engineer with hands-on experience in building and managing data pipelines using AWS Glue, S3, and Athena. The ideal candidate will have a strong background in ETL processes, data modeling, and cloud-native data engineering practices.
Key Responsibilities:
5 years of experience in data engineering with a focus on AWS.
Design, develop, and maintain scalable ETL pipelines using AWS Glue, PySpark, and Python (a minimal sketch follows below).
Manage data lakes and data warehouses using Amazon S3, Athena, and Redshift.
Develop and maintain Athena queries for data analysis and reporting.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
Implement data quality checks, monitoring, and alerting for data pipelines.
Ensure data security and compliance with organizational and regulatory standards.
Optimize performance and cost of data workflows in the AWS environment.
Implement best practices for data ingestion, transformation, and storage in a cloud-native environment.
Monitor and troubleshoot data pipelines, ensuring high availability and performance.
Mentor junior data engineers and contribute to team knowledge sharing and process improvement.
Required Skills & Qualifications:
Expert-level proficiency in AWS Glue, S3, Athena, and Lambda.
Strong programming skills in Python, PySpark, and SQL.
Experience with data modeling, schema design, and data warehousing concepts.
Experience with data lake architecture and partitioning strategies.
Familiarity with CI/CD pipelines, infrastructure as code (Terraform/CloudFormation), and DevOps practices.
Solid understanding of data governance, security, and compliance frameworks.
Excellent problem-solving and communication skills.
AWS Certified Data Analytics - Specialty or equivalent certification.
Exposure to Redshift, Kinesis, or Kafka is a plus.
Background in big data technologies like EMR, Hadoop, or Spark is a plus.
Exposure to machine learning pipelines and data science workflows is a plus.
Skills: Apache Kafka, Apache Spark, Data Migration, Python, Hadoop Ecosystem (HDFS)
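As an indicative example of the pipeline work described above, here is a minimal PySpark-plus-Athena sketch. The bucket names, database, table, and schema are hypothetical; in production this would typically run as an AWS Glue job with the Glue Data Catalog providing table metadata.

# Minimal sketch of an S3 -> PySpark -> S3 ETL step plus an Athena query;
# bucket names, database, and columns are hypothetical.
import boto3
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in the data lake.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: basic cleansing and typing before writing to the curated zone.
curated = (
    raw.withColumn("order_amount", F.col("order_amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: partitioned Parquet, which Athena can query via the Glue catalog.
(curated.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))

# Downstream reporting query submitted to Athena.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT order_date, SUM(order_amount) AS revenue "
                "FROM curated.orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/"},
)

Partitioning by order_date keeps Athena scans (and cost) proportional to the date range queried, which is the usual motivation for the layout shown here.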
Posted 1 week ago
10.0 - 20.0 years
20 - 35 Lacs
chennai, bengaluru
Hybrid
Role: Omnichannel Data Architect
Key Responsibilities
Review and document the current omnichannel campaign data collection setup across Paid, Owned, and Earned channels.
Create a migration plan to move campaign data tracking and governance into the Adobe ecosystem (AEP, CJA, AJO).
Define end-to-end campaign tagging governance frameworks, ensuring consistency, scalability, and compliance across global marketing teams (a minimal tag-validation sketch follows below).
Partner with marketing, analytics, and technology teams to ensure data integrity and seamless tracking integration.
Perform ongoing data quality checks and troubleshoot tagging or tracking issues.
Work with stakeholders to implement best practices for campaign tracking.
Maintain a central campaign tracking repository to streamline reporting.
Must-Have Skills:
Hands-on experience in marketing data architecture, digital analytics, and marketing technology.
Deep knowledge of campaign tagging, metadata/taxonomy design, and governance frameworks.
Expertise in Adobe Experience Cloud solutions: Adobe Analytics / Customer Journey Analytics, Adobe Experience Platform, Adobe Journey Optimizer / Adobe Campaign, Adobe Target.
Strong understanding of marketing attribution models (rule-based, algorithmic, multi-touch, data-driven).
Ability to work across cross-functional teams.
Good-to-Have Skills:
Experience with Google Analytics / GA4.
Familiarity with advertising technologies (ad servers, DSPs, tracking pixels, programmatic platforms).
Working knowledge of BI and visualization tools (Power BI, Tableau, Looker, etc.).
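As a small illustration of tagging governance in practice, below is a hedged Python sketch that validates campaign landing-page URLs against a naming taxonomy. The parameter names follow the common UTM convention, while the allowed channels and the campaign naming pattern are hypothetical stand-ins for whatever standard the governance framework actually defines.

# Minimal sketch of a campaign-tag governance check; the taxonomy, allowed
# channels, and naming convention below are hypothetical placeholders.
from urllib.parse import urlparse, parse_qs

ALLOWED_CHANNELS = {"search", "social", "email", "display"}
REQUIRED_PARAMS = ["utm_source", "utm_medium", "utm_campaign"]

def validate_landing_url(url: str) -> list[str]:
    """Return a list of tagging issues found on a campaign landing URL."""
    issues = []
    params = parse_qs(urlparse(url).query)

    # Every landing URL must carry the mandatory tracking parameters.
    for p in REQUIRED_PARAMS:
        if p not in params:
            issues.append(f"missing {p}")

    # Hypothetical naming convention: channel_market_initiative_period
    campaign = params.get("utm_campaign", [""])[0]
    parts = campaign.split("_")
    if campaign and (len(parts) < 4 or parts[0] not in ALLOWED_CHANNELS):
        issues.append(f"utm_campaign '{campaign}' does not follow the taxonomy")

    return issues

print(validate_landing_url(
    "https://example.com/offer?utm_source=google&utm_medium=cpc"
    "&utm_campaign=search_in_diwali_2024q4"
))

A check like this can run over the central campaign tracking repository mentioned above to surface non-compliant tags before they reach reporting.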
Posted 1 week ago
15.0 - 25.0 years
13 - 18 Lacs
bengaluru
Work from Office
About The Role
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must have skills: Google Cloud Platform Architecture
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Architect, you will design and deliver end-to-end data architecture solutions for platforms, products, or engagements on Google Cloud. You will define architectures that meet performance, scalability, security, and compliance requirements while ensuring data integrity and accessibility. You will be responsible for the successful implementation of data solutions that align with business strategy.
Roles & Responsibilities:
Expected to be a Subject Matter Expert (SME) with deep expertise in Google Cloud data architecture.
Provide strategic guidance, influencing architectural decisions across multiple teams.
Collaborate with stakeholders to define data strategies, roadmaps, and governance models.
Design enterprise-grade data architectures supporting analytics, AI/ML, and operational workloads (see the sketch after this listing).
Ensure solutions adhere to best practices for security, performance, and cost optimization.
Lead the implementation of data architecture frameworks and reference models.
Guide teams on data migration, integration, and modernization initiatives.
Professional & Technical Skills:
Must To Have Skills: Expertise in Google Cloud data services (BigQuery, Cloud Storage, Pub/Sub, Dataflow, Dataproc, etc.).
Strong knowledge of data architecture principles, data modeling, and data governance.
Proven experience in designing scalable, high-performance, and secure cloud-based data platforms.
Hands-on experience with data ingestion, ETL/ELT, streaming, and batch processing.
Familiarity with compliance frameworks and data security best practices in cloud environments.
Additional Information:
The candidate should have a minimum of 16 years of experience in data architecture, with a strong focus on Google Cloud.
This position is based Pan India.
Qualification: 15 years full time education
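For context on the Google Cloud data services named in the listing above, the following is a minimal sketch using the google-cloud-bigquery client to build a partitioned, clustered reporting table. The project, dataset, table, and column names are hypothetical and only illustrate the pattern.

# Minimal sketch of a BigQuery reporting table build; project, dataset,
# table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

ddl = """
CREATE TABLE IF NOT EXISTS analytics.daily_txn_summary
PARTITION BY txn_date
CLUSTER BY branch_id AS
SELECT DATE(event_ts) AS txn_date,
       branch_id,
       COUNT(*) AS txn_count,
       SUM(amount) AS total_amount
FROM raw.transactions
GROUP BY txn_date, branch_id
"""

# Run the DDL; result() blocks until the job completes.
client.query(ddl).result()

# Quick sanity check on the most recent week of the summary table.
for row in client.query(
    "SELECT txn_date, SUM(total_amount) AS amt "
    "FROM analytics.daily_txn_summary "
    "GROUP BY txn_date ORDER BY txn_date DESC LIMIT 7"
).result():
    print(row.txn_date, row.amt)

Date partitioning plus clustering on a frequently filtered key is a common cost and performance lever on BigQuery, which is why the sketch centres on it.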
Posted 1 week ago
15.0 - 20.0 years
10 - 14 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: SAP Master Data Governance MDG Tool
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business needs and technical specifications. Your role will require effective communication and coordination to facilitate smooth project execution and delivery.
Roles & Responsibilities:
- Candidate should have minimum 2 S/4 HANA implementation project experience.
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing and mentoring among team members.
- Monitor project progress and ensure adherence to timelines and quality standards.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP Master Data Governance MDG Tool.
- Strong understanding of data governance principles and practices.
- Experience with application design and configuration.
- Ability to lead cross-functional teams and manage stakeholder expectations.
- Familiarity with project management methodologies.
Additional Information:
- The candidate should have minimum 5 years of experience in SAP Master Data Governance MDG Tool.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification: 15 years full time education
Posted 1 week ago
15.0 - 20.0 years
10 - 14 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: SAP Master Data Governance MDG Tool
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business needs and technical specifications, while fostering a collaborative environment that encourages innovation and efficiency.
Roles & Responsibilities:
- Candidate should have minimum 2 S/4 HANA implementation project experience.
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of milestones.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in SAP Master Data Governance MDG Tool.
- Strong understanding of data governance principles and practices.
- Experience with application design and configuration.
- Ability to lead cross-functional teams effectively.
- Familiarity with project management methodologies.
Additional Information:
- The candidate should have minimum 7.5 years of experience in SAP Master Data Governance MDG Tool.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification: 15 years full time education
Posted 1 week ago
15.0 - 20.0 years
10 - 14 Lacs
coimbatore
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are seeking a skilled and detail-oriented Data Engineer / BI Developer with hands-on experience in Databricks and Power BI to join our data analytics team. The ideal candidate will be responsible for building scalable data pipelines, transforming data using Databricks, and creating insightful dashboards and reports in Power BI to support business decision-making.
Data Engineer / BI Developer - Databricks & Power BI
Key Responsibilities:
Design, develop, and maintain data pipelines using Databricks (PySpark/SQL); a minimal pipeline sketch follows below.
Integrate and transform large datasets from various sources into the Databricks Lakehouse.
Develop and publish interactive Power BI dashboards and reports.
Optimize data models and queries for performance and scalability.
Implement data quality checks and ensure data integrity across systems.
Automate data workflows and reporting processes.
Monitor and troubleshoot data pipeline issues.
Required Skills:
Proven experience with Databricks (including Delta Lake, Spark, and notebooks).
Strong proficiency in Power BI (DAX, Power Query, data modeling).
Solid understanding of SQL and data warehousing concepts.
Experience with Azure Data Services (e.g., Azure Data Lake, Azure Synapse).
Familiarity with ETL/ELT processes and tools.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data governance and security best practices.
Qualification: 15 years full time education
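To make the pipeline responsibility above concrete, here is a hedged sketch of a medallion-style (bronze/silver/gold) PySpark flow on Databricks, writing Delta tables that a Power BI dataset could then consume. The paths, table names, and schema are hypothetical, and the silver and gold schemas are assumed to exist already.

# Minimal sketch of a bronze/silver/gold Delta pipeline on Databricks;
# paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-lakehouse").getOrCreate()

# Bronze: ingest raw JSON files as landed.
bronze = spark.read.json("/mnt/raw/sales/")

# Silver: cleanse, type, and deduplicate (assumes the silver schema exists).
silver = (
    bronze.withColumn("sale_date", F.to_date("sale_date"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .dropDuplicates(["sale_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales")

# Gold: aggregated table for the Power BI semantic model (gold schema assumed).
gold = (
    spark.table("silver.sales")
         .groupBy("sale_date", "region")
         .agg(F.sum("amount").alias("revenue"), F.count("sale_id").alias("orders"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.sales_daily")

Power BI would then connect to the gold table (for example via the Databricks connector), keeping heavy transformation in the lakehouse and leaving the report layer thin.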
Posted 1 week ago