
139 Data Lineage Jobs - Page 4

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

15 - 20 Lacs

Pune

Hybrid

EY is hiring for a leading client for a Data Governance Senior Analyst role at the Pune location.

Role & responsibilities:
- Coordinate with Data Stewards/Data Owners to identify critical data elements for SAP master data (Supplier/Finance/Bank master).
- Develop and maintain a business-facing data glossary and data catalog for SAP master data (Supplier, Customer, Finance: GL, Cost Center, Profit Center, etc.), capturing data definitions, lineage, and usage.
- Develop and implement data governance policies, standards, and processes to ensure data quality, data management, and compliance for the relevant SAP master data (Finance, Supplier, and Customer master data).
- Develop both end-state and interim-state architecture for master data, ensuring alignment with business requirements and industry best practices.
- Define and implement data models that align with business needs, and gather requirements for master data structures.
- Design scalable and maintainable data models that ensure data creation through a single source of truth.
- Conduct data quality assessments and implement corrective actions to address data quality issues.
- Collaborate with cross-functional teams to ensure data governance practices are integrated into all relevant SAP business processes.
- Manage data cataloging and lineage to provide visibility into data assets, their origins, and transformations in the SAP environment.
- Facilitate governance forums, data domain councils, and change advisory boards to review data issues, standards, and continuous improvements.
- Collaborate with the Data Governance Manager to advance the data governance agenda.
- Prepare data documentation, including data models, process flows, governance policies, and stewardship responsibilities.
- Collaborate with IT, data management, and business units to implement data governance best practices and migrate from ECC to S/4 MDG.
- Monitor data governance activities, measure progress, and report on key metrics to senior management.
- Conduct training sessions and create awareness programs to promote data governance within the organization.
- Demonstrate a deep understanding of master data structures in SAP (and other ERP systems such as JD Edwards), including Vendor, Customer, Cost Center, Profit Center, and GL Accounts.

Summary: SAP master data (Vendor, Customer, GL, Cost Center, etc.); data governance implementation (transactional and master data); data modeling and architecture (S/4HANA, ECC); data cataloging, lineage, and quality assessment; governance forums and change advisory boards; experience in S/4HANA greenfield implementations; migration experience (ECC to S/4 MDG).

Preferred candidate profile: 8-14 years in data governance and SAP master data; strong understanding of upstream/downstream data impacts; expert in data visualization.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

22 - 30 Lacs

Mumbai

Hybrid

Type of candidate they want:
- Strong experience with data governance tools (especially Informatica)
- Experience building data policies and quality frameworks
- Knows privacy laws and regulatory standards
- Has worked with cloud platforms like AWS, Azure, or GCP
- Can manage cross-functional teams, conduct meetings, and influence business leaders

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad

Work from Office

A seasoned data professional with 5+ years in data governance, stewardship, and DAMA DMBoK practices. Strong SQL, metadata management, regulatory compliance (e.g., GDPR), and stakeholder engagement skills are required. CDMP certification is a plus!

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a PySpark Data Engineer, you must have a minimum of 2 years of experience in PySpark. Strong programming skills in Python, PySpark, and Scala are preferred. Experience in designing and implementing CI/CD, build management, and development strategies is essential, along with familiarity with SQL and SQL analytical functions and participation in key business, architectural, and technical decisions. There is an opportunity for training in AWS cloud technology.

In the Python Developer role, a minimum of 2 years of experience in Python/PySpark is necessary. Strong programming skills in Python, PySpark, and Scala are preferred. Experience in designing and implementing CI/CD, build management, and development strategies is essential, as are familiarity with SQL and SQL analytical functions and participation in key business, architectural, and technical decisions. There is potential for training in AWS cloud technology.

As a Senior Software Engineer at Capgemini, you should have over 3 years of experience in Scala with a strong project track record. Hands-on experience in Scala/Spark development and SQL writing skills on RDBMS (DB2) databases are crucial. Experience working with different file formats such as JSON, Parquet, AVRO, ORC, and XML is preferred, and previous involvement in an HDFS platform development project is necessary. Proficiency in data analysis, data profiling, and data lineage, along with strong oral and written communication skills, is required. Experience in Agile projects is a plus.

For the Data Modeler position, expertise in data structures, algorithms, calculus, linear algebra, machine learning, and modeling is essential. Knowledge of data warehousing concepts such as star schema, snowflake, or data vault for data marts or data warehousing is required, as is proficiency in data modeling software such as Erwin, ER/Studio, or MySQL Workbench to produce logical and physical data models.
Hands-on knowledge of tools such as PL/SQL, PySpark, Hive, and Impala is preferred, along with experience with the software development lifecycle using Agile methodology and strong communication and stakeholder management skills. In this role, you will design, develop, and optimize PL/SQL procedures, functions, triggers, and packages; write efficient SQL queries, joins, and subqueries for data retrieval and manipulation; and develop and maintain database objects such as tables, views, indexes, and sequences. Optimizing query performance and troubleshooting database issues are key responsibilities, as is collaborating with application developers, business analysts, and system architects to understand database requirements. You will also ensure data integrity, consistency, and security within Oracle databases; develop ETL processes and scripts for data migration and integration; document database structures, stored procedures, and coding best practices; and stay up to date with Oracle database technologies, best practices, and industry trends.
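Several of the roles above ask for "SQL and SQL Analytical functions". As a minimal, hedged illustration of what that skill covers (the table, column names, and values are invented for the sketch, run here against Python's built-in sqlite3), a per-group rank and running total can be computed with window functions:

```python
import sqlite3

# Hypothetical data purely for illustration: window functions computing a
# per-department salary rank and a running total within each department.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE salaries (emp TEXT, dept TEXT, amount INTEGER);
    INSERT INTO salaries VALUES
        ('a', 'fin', 100), ('b', 'fin', 300),
        ('c', 'ops', 200), ('d', 'ops', 150);
""")
rows = conn.execute("""
    SELECT emp, dept, amount,
           RANK() OVER (PARTITION BY dept ORDER BY amount DESC) AS dept_rank,
           SUM(amount) OVER (PARTITION BY dept ORDER BY amount DESC
                             ROWS UNBOUNDED PRECEDING) AS running_total
    FROM salaries
""").fetchall()
for r in rows:
    print(r)
```

The same OVER (PARTITION BY ... ORDER BY ...) syntax is standard SQL and is available on most RDBMSs, including the DB2 and Oracle platforms named in the postings; RANK() leaves gaps after ties, whereas ROW_NUMBER() would not.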

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

The Data Governance Specialist - Finance & Treasury will play a pivotal role in defining and implementing a data governance strategy for Finance & Treasury. You will partner closely with the Finance, Fin Ops, and Treasury Management teams, Business Leaders (Data Providers), CDO, GLRR, Risk, CFOs, COOs, and CIOs. Activities include, but are not limited to:
- Assisting in the delivery of the S166 Liquidity remediation activity across Finance & Treasury
- Implementing BAU Liquidity processes to comply with the Data Quality Management Standard (DQMS)
- Collaborating with the S166 Liquidity Programme teams and supporting the Head of Data Governance & Compliance
- Conducting data analysis on data lineage flows and escalating breaches of the Data Quality Management Framework (DQMF)
- Working with Technology to establish a standardised toolset, and supporting the Head of DG in ensuring DG BU metrics via MDM
Ensuring full compliance with DQMS and promptly escalating any elevated data risk issues for timely resolution will be key. Working closely with upstream business functions to track and monitor remediation activity, overseeing the remediation activity supporting the S166 Liquidity work, and ensuring the integrity and quality of DQMF artefacts will be important business processes. You will also be responsible for developing training and awareness programs for data governance across all Finance and Treasury teams, fostering a culture of data management, identifying, assessing, monitoring, controlling, and mitigating risks relevant to F&T data governance, and ensuring accurate, high-quality updates are presented in the DQ governance forums.
The role will require you to work with a wide range of key stakeholders, such as the Group CFO, Head of Finance, Group Treasurer, the Finance, Fin Ops, and Treasury Management teams, Head of Data Management (Finance & Treasury), Head of Data Governance (Finance & Treasury), the BCBS 239 programme team, Business COOs and Business Leaders, CFOs, CIOs, CDO, Risk, Audit and Compliance, external consultants/agents, and regulators. Qualifications required include an MBA (Finance) or a Master's in Finance, Accountancy, Economics, or an affiliated subject; DCAM Professional certification; a minimum of 7 to 10 years of experience in data governance and data management; good knowledge of finance domains and BU metrics; the ability to analyze data to drive greater business insight; and proficiency in MS Excel and SQL. Experience with visualization tools such as Tableau, Power BI, or Qlik would be a plus. If you are interested in this opportunity, please visit our website via the Apply button below for further information and to apply.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

At EY, you will have the opportunity to shape a career that reflects your unique qualities, supported by a global presence, an inclusive environment, and cutting-edge technology to help you reach your full potential. Your voice and perspective are crucial in contributing to EY's continuous improvement. Join us to create an outstanding experience for yourself while making a positive impact on the working world for everyone.

As an OFSAA Senior, your role involves leading and overseeing OFSAA implementation and consulting projects, managing engagements at the practice level, driving business growth, and ensuring the successful delivery of projects within budget and quality standards. Your responsibilities include:

Client Interaction:
- Demonstrate excellent communication and presentation skills for client engagement.
- Work with clients across various stages of implementation projects.
- Identify and pursue innovative opportunities to expand the practice's reach within the client ecosystem.
- Direct consulting resources to support clients in implementing OFSAA solutions.
- Assess and mitigate business risks while pursuing practice goals.
- Maintain strategic direction, ensure practice profitability, uphold consulting quality, and enhance customer satisfaction.

Team Leadership:
- Possess experience in OFSAA implementations or a background in Financial Services with similar solution implementation expertise.
- Lead large teams effectively to deliver exceptional client service.
- Manage ETL (e.g., ODI, Informatica) and reporting (e.g., OBIEE, Power BI) applications.
- Oversee people management, portfolio/delivery management, and sales enablement within the practice.
- Be accountable for operational, financial, and people metrics, as well as overall business outcomes.
- Familiarity with OFSAA solutions such as EPM, ERM, FCCM, and IFRS, and related technologies.
Additional Requirements:
- Execute large/medium OFSAA programs and demonstrate advanced consulting skills and industry expertise.
- Contribute to business development activities such as presales and practice expansion.
- Manage consultancy assignments and exhibit a strong understanding of data lineage.
- Ensure customer satisfaction and delivery excellence through end-to-end accountability.
- Prioritize project deliveries in collaboration with the implementation team.
- Approach problems proactively, logically, and systematically, presenting clear solutions.
- Display willingness to learn and adapt to evolving requirements.

Join EY in its mission to build a better working world by delivering long-term value for clients, promoting trust in capital markets, and driving growth and transformation through diverse teams worldwide. With a focus on assurance, consulting, law, strategy, tax, and transactions, EY teams tackle complex global challenges by asking better questions and finding innovative solutions.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

18 - 30 Lacs

Mumbai

Work from Office

Hello Connections, greetings from Teamware Solutions! We are #Hiring for a top investment bank.

Position: Data Analyst
Location: Mumbai
Experience Range: 3 to 6 years
Notice Period: Immediate to 30 days
Must-have skills: data analysis, data catalog, data analytics, and data governance, along with Collibra.

What You'll Do - As part of the Data & Analytics Group (DAG), reporting locally to the Head of the India DAG, the individual is responsible for the following:
1. Review, analyze, and resolve data quality issues across the IM data architecture.
2. Coordinate with data owners and other teams to identify the root cause of data quality issues and implement solutions.
3. Coordinate the onboarding of data from various internal/external sources into the central repository.
4. Work closely with Data Owners/Owner delegates on data analysis and the development of data quality (DQ) rules; work with IT on enhancing DQ controls.
5. Conduct end-to-end analysis of business processes, data flows, and data usage to improve business productivity through re-engineering and data governance.
6. Manage the change control process and participate in user acceptance testing (UAT) activities.

What We're Looking For:
1. A minimum of 3-6 years of experience in data analysis, data catalogs, and Collibra.
2. Experience in data analysis and profiling using SQL is a must.
3. Knowledge of coding; Python is a plus.
4. Experience working with cataloging tools like Collibra.
5. Experience working with BI reporting tools like Tableau or Power BI is preferred.

Preferred Qualifications:
1. A Bachelor's degree is required; any other relevant academic course is a plus.
2. Fluent in English.

Apply now: francy.s@twsol.com

Posted 3 weeks ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Job title: Business Analyst

Responsibilities:

Analytical support:
- Gather all operational and financial data across all centers to provide inputs into the weekly MIS as well as the monthly review meeting.
- Drive meaningful weekly/monthly reports that help the regional managers take decisions on their centers' health.
- Analyse financial data (budgets, income statements, etc.) to understand Oasis Fertility's financial health.
- Coordinate all operational issues captured at the center level and program-manage their closure through cross-functional collaboration.
- Evaluate operational expenditures (OPEX) and capital expenditures (CAPEX) against the budget to identify variances.
- Analyse operational data to identify trends and areas for improvement.
- Conduct ad-hoc analytics against a hypothesis and derive insights that will impact business performance.

Operational support:
- Coordinate the assimilation of data for calculating doctor payouts and deliver the final file to finance.
- Coordinate and assimilate data to calculate incentives for the eligible operations team members.
- Use key metrics such as yearly growth, return on assets (ROA), return on equity (ROE), and earnings per share (EPS) to assess operational performance.
- Collaborate with the operations and finance teams to ensure alignment between operational and financial goals.

Strategic support:
- Conduct business studies to understand past, present, and potential future performance.
- Conduct market research to stay updated on financial trends in the fertility industry.
- Evaluate the effectiveness of current processes and recommend changes for better efficiency.
- Develop data-driven recommendations to improve operational efficiency.
- Prepare financial models to assess the profitability of different business units and potential investment opportunities.
- Participate in process improvement initiatives and policy development to optimize business functions.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Title: Data Governance & Quality Specialist
Experience: 3-8 years
Location: Bangalore (Hybrid)
Domain: Financial Services
Notice Period: Immediate to 30 days

What You'll Do:
- Define and enforce data governance policies (BCBS 239/GDPR) across credit-risk datasets
- Design, monitor, and report on data quality KPIs; perform profiling and root-cause analysis in SAS/SQL
- Collaborate with data stewards, risk teams, and auditors to remediate data issues
- Develop governance artifacts: data lineage maps, stewardship RACI, council presentations

Must Have:
- 3-8 years in data governance or data quality roles (financial services)
- Advanced SAS for data profiling and reporting; strong SQL skills
- Hands-on experience with governance frameworks and regulatory requirements
- Excellent stakeholder management and documentation abilities

Nice to Have:
- Experience with Collibra, Informatica, or Talend
- Exposure to credit-risk model inputs (PD/LGD/EAD)
- Automation via SAS macros or Python scripting

If interested, please share your resume with sunidhi.manhas@portraypeople.com

Posted 3 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Job Description:
- Strong change and project management skills; stakeholder management, communications, and reporting
- Domain knowledge in data management, data governance, and data quality management
- Subject matter expertise required in more than one of the following areas: data management, data governance, data quality measurement and reporting, and data quality issues management
- Liaise with IWPB markets and stakeholders to coordinate delivery of organizational DQ governance objectives, and provide consultative support to facilitate progress
- Conduct analysis of the IWPB DQ portfolio to identify thematic trends and insights, to effectively advise stakeholders in managing their respective domains
- Proficiency in MI reporting and visualization is strongly preferred
- Proficiency in change and project management is strongly preferred
- Ability to prepare programme update materials and present them to senior stakeholders, with prompt responses to any issues/escalations
- Strong communication and stakeholder management skills: should be able to work effectively and maintain strong working relationships as an integral part of a larger team
- 8+ years of relevant experience preferred

Posted 4 weeks ago

Apply

10.0 - 17.0 years

20 - 35 Lacs

Chennai

Work from Office

Must have: data governance / DataOps experience. Work from the Chennai office 5 days a week. Experience: 10+ years (8+ years considered). CTC: up to 35 LPA. Should have EDC development experience covering data discovery, data domain creation, relationships, profiling, data lineage, data curation, etc.

Required candidate profile: exposure to architecting an enterprise-level data governance solution using the Informatica tool AXON. Must be able to integrate AXON with other tools, including Informatica tools such as EDC and IDQ.

Posted 4 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI-readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
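The posting asks for data quality enforcement "with checks for null %, ranges, and referential rules" via Great Expectations. As a library-free sketch of what those three check types mean (the field names, records, and thresholds here are invented for illustration; Great Expectations wraps the same ideas in declarative expectations):

```python
# Invented example data and thresholds, purely to illustrate the three check
# types named in the posting: null-percentage, range, and referential checks.
def null_pct(rows, field):
    """Fraction of rows where `field` is missing."""
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

def in_range(rows, field, lo, hi):
    """True if every non-null value of `field` falls in [lo, hi]."""
    return all(lo <= r[field] <= hi for r in rows if r[field] is not None)

def referential(rows, field, valid_keys):
    """True if every value of `field` exists in the reference key set."""
    return all(r[field] in valid_keys for r in rows)

shipments = [
    {"carrier_id": "C1", "weight_kg": 12.5},
    {"carrier_id": "C2", "weight_kg": None},
    {"carrier_id": "C1", "weight_kg": 80.0},
]
carriers = {"C1", "C2"}  # hypothetical reference (dimension) keys

assert null_pct(shipments, "weight_kg") <= 0.5        # null % under threshold
assert in_range(shipments, "weight_kg", 0, 1000)      # plausible weight range
assert referential(shipments, "carrier_id", carriers) # FK-style check
print("all data-quality checks passed")
```

In a real pipeline these assertions would run as a validation step before data moves to QA or BI, with failures written to the audit logs the posting also mentions.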

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI-readiness with Sigma, with exposure to Power BI/Tableau (nice to have).

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Noida

Work from Office

Must be:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline.
- 3-5+ years of experience in SQL development and data engineering.
- Strong hands-on skills in T-SQL, including complex joins, indexing strategies, and query optimization.
- Proven experience in Power BI development, including building dashboards, writing DAX expressions, and using Power Query.

Should be:
- At least 1+ year of hands-on experience with one or more components of the Azure Data Platform: Azure Data Factory (ADF), Azure Databricks, Azure SQL Database, Azure Synapse Analytics.
- Solid understanding of data warehouse architecture, including star and snowflake schemas, and data lake design principles.
- Familiar with Data Lake and Delta Lake concepts, Lakehouse architecture, and data governance, data lineage, and security controls within Azure.
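The "star and snowflake schemas" requirement above can be illustrated with a toy star schema: a central fact table joined to a dimension table and aggregated, the canonical access pattern in a data warehouse. The tables and figures below are invented for the sketch (run against Python's built-in sqlite3):

```python
import sqlite3

# Toy star schema: a sales fact table keyed to a product dimension. Real
# warehouses have many dimensions (date, customer, store); one suffices here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER, revenue REAL);
    INSERT INTO dim_product VALUES (1, 'widgets'), (2, 'gadgets');
    INSERT INTO fact_sales  VALUES (1, 3, 30.0), (1, 2, 20.0), (2, 1, 99.0);
""")
# The star-schema query shape: join fact to dimension, group by a dimension
# attribute, aggregate a fact measure.
totals = dict(conn.execute("""
    SELECT d.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category
""").fetchall())
print(totals)
```

A snowflake schema differs only in that the dimension itself is normalized into further lookup tables (e.g., product referencing a separate category table), trading join depth for reduced redundancy.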

Posted 1 month ago

Apply

8.0 - 12.0 years

35 - 45 Lacs

Hyderabad, Pune, Delhi / NCR

Hybrid

Write specifications for Master Data Management builds. Create requirements, including rules of survivorship, for migrating data to Markit EDM and implementing data governance. Support testing. Develop data quality reports for the data warehouse.

Required candidate profile: 5+ years of experience documenting data management requirements; experience writing technical specifications for MDM builds; familiarity with enterprise data warehouses; knowledge of data governance.

Posted 1 month ago

Apply

8.0 - 12.0 years

40 - 45 Lacs

Bhubaneswar, Bengaluru, Delhi / NCR

Hybrid

Write specifications for Master Data Management builds. Create requirements, including rules of survivorship, for migrating data to Markit EDM and implementing data governance. Support testing. Develop data quality reports for the data warehouse.

Required candidate profile: 5+ years of experience documenting data management requirements; experience writing technical specifications for MDM builds; familiarity with enterprise data warehouses; knowledge of data governance.

Posted 1 month ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Pune

Work from Office

Role & responsibilities:
- Analyse, manage, and maintain entity reference data to ensure accuracy, consistency, and completeness across systems.
- Collaborate with product and technology teams to design, develop, and implement new features and controls for our reference data products.
- Identify opportunities for product improvement and process optimization, providing actionable recommendations based on thorough data analysis.
- Conduct root-cause analysis of data issues, propose solutions, and oversee their implementation.
- Define and translate business requirements into functional specifications for developers and QA teams.
- Evaluate and integrate new data sources, ensuring alignment with business needs and data governance standards.
- Utilize SQL for data extraction, transformation, and analysis; automate data quality checks and reporting.
- Leverage exploratory data tools (such as Alteryx, KNIME, or similar) to build workflows, analyse large datasets, and generate insights.
- Monitor data quality metrics, develop dashboards, and prepare management reports.
- Engage with stakeholders to gather requirements, document processes, and communicate findings effectively.
- Support ongoing data governance, compliance, and audit requirements within the reference data domain.

Preferred candidate profile:
- 8-10 years of relevant experience in entity reference data management, preferably within financial services, fintech, or data-centric organizations.
- Proven expertise in data analysis, data quality management, and process improvement.
- Advanced proficiency in SQL for querying, analysing, and managing large datasets.
- Hands-on experience with at least one leading data exploration or ETL tool (e.g., Alteryx, KNIME).
- Strong understanding of data governance, data standards, and regulatory compliance in the context of reference data.
- Excellent problem-solving skills, with the ability to work independently on complex data challenges.
- Strong communication skills, both verbal and written, with the ability to interact effectively with technical and business stakeholders.
- Bachelor's or Master's degree in Computer Science, Engineering, Finance, Business, or a related field.

Posted 1 month ago

Apply

6.0 - 11.0 years

11 - 20 Lacs

Pune

Hybrid

A Day in the Life: Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. Medtronic is hiring a Senior Data Governance Engineer. As a Senior Engineer, you will be a key member of our Data Governance team, defining scope, assessing current data governance capabilities, and building roadmaps to mature those capabilities. This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and to reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction. Responsibilities may include the following, and other duties may be assigned.
- Data Governance Strategy Development: responsible for the strategic development, architectural definition, and enterprise-wide data governance and data management initiatives supporting the delivery of data as a service.
- Provide data governance and data management advisory expertise.
- Identify and evaluate metadata platform alternatives, develop the logical metadata framework architecture, and define and implement metadata maintenance processes.
- Define data governance operating models, roles, and responsibilities in collaboration with Medtronic requirements.
- Assess and select the data management tools landscape, including data profiling, data quality, metadata, MDM, rules engines, and pattern matching.
- Refine and develop the data-centric approach and methodologies, which may include areas such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.
- Assess current-state capabilities, identify gaps versus leading practices, and recommend future-state data requirements.
- Identify opportunities for new workflows, third-party glossary/metadata tools, database architectures, and third-party pre-existing data models.
• Work closely with business and technology stakeholders to help understand and document data requirements and lineage, partnering with IT data integration and data warehouse teams to ensure requirements are effectively executed.
• Help define data modeling naming standards, abbreviations, guidelines, and best practices.
• Enhance or design data model review processes based on business requirements.

Minimum Qualifications:
• At least 5 years of experience developing/structuring an enterprise-wide data governance organization and business process (operating models, roles, partner organizations, responsibilities).
• Hands-on experience on both the business side and the IT side implementing or supporting MDM and/or Data Warehouse and Reporting IT solutions.
• Strong business knowledge of the investment management industry and common data management operations.
• Broad understanding of the role of data management within global markets organizations, information flow, and data governance issues.
• Domain expertise in specific areas of Data Management, such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.

Note: Immediate joiners or candidates serving their notice period are preferred. If interested, please share your updated CV at ashwini.ukekar@medtronic.com

Posted 1 month ago

Apply

7.0 - 11.0 years

11 - 21 Lacs

Pune

Work from Office

Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes.

Medtronic is hiring a Senior Data Governance Engineer. As a Senior Engineer, you will be a key member of our Data Governance team, defining scope, assessing current data governance capabilities, and building roadmaps to mature those capabilities. This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction.

Responsibilities may include the following, and other duties may be assigned:
• Data Governance Strategy Development: responsible for the strategic development, architectural definition, and enterprise-wide data governance and data management initiatives supporting the delivery of data as a service.
• Provide Data Governance and Data Management advisory expertise.
• Responsible for the identification and evaluation of metadata platform alternatives, the development of the logical metadata framework architecture, and the definition and implementation of metadata maintenance processes.
• Define Data Governance operating models, roles, and responsibilities in line with Medtronic requirements.
• Responsible for the assessment and selection of the Data Management tools landscape, including data profiling, data quality, metadata, MDM, rules engines, and pattern matching.
• Refine and develop the data-centric approach and methodologies, which may include areas such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.
• Assess current-state capabilities, identify gaps versus leading practices, and recommend future-state data requirements.
• Identify opportunities for new workflows, third-party glossary/metadata tools, database architectures, and third-party pre-existing data models.
• Work closely with business and technology stakeholders to help understand and document data requirements and lineage, partnering with IT data integration and data warehouse teams to ensure requirements are effectively executed.
• Help define data modeling naming standards, abbreviations, guidelines, and best practices.
• Enhance or design data model review processes based on business requirements.

Required Knowledge and Experience:
• At least 5 years of experience developing/structuring an enterprise-wide data governance organization and business process (operating models, roles, partner organizations, responsibilities).
• Hands-on experience on both the business side and the IT side implementing or supporting MDM and/or Data Warehouse and Reporting IT solutions.
• Strong business knowledge of the investment management industry and common data management operations.
• Broad understanding of the role of data management within global markets organizations, information flow, and data governance issues.
• Domain expertise in specific areas of Data Management, such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.

Posted 1 month ago

Apply

5.0 - 10.0 years

35 - 45 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

• Write specifications for Master Data Management (MDM) builds.
• Create requirements, including rules of survivorship, for migrating data to Markit EDM.
• Support the implementation of data governance and support testing.
• Develop data quality reports for the data warehouse.

Required Candidate Profile: 5+ years of experience documenting data management requirements; experience writing technical specifications for MDM builds; familiarity with enterprise data warehouses; knowledge of data governance.
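The "rules of survivorship" mentioned above decide which source's value wins when duplicate master records are merged. A minimal sketch in Python, assuming an illustrative rule (most recent update wins, ties broken by a source-priority order); the source names, priorities, and fields are invented for the example, not taken from the posting:

```python
from datetime import date

# Illustrative source-priority order: lower number wins ties.
SOURCE_PRIORITY = {"CRM": 1, "ERP": 2, "LEGACY": 3}

def survive(records, field):
    """Pick the surviving value for `field`: the most recently updated
    non-null value wins; ties are broken by source priority."""
    candidates = [r for r in records if r.get(field) not in (None, "")]
    if not candidates:
        return None
    best = max(candidates,
               key=lambda r: (r["updated"], -SOURCE_PRIORITY[r["source"]]))
    return best[field]

records = [
    {"source": "LEGACY", "updated": date(2023, 1, 5), "phone": "555-0100"},
    {"source": "CRM",    "updated": date(2024, 3, 1), "phone": None},
    {"source": "ERP",    "updated": date(2024, 3, 1), "phone": "555-0199"},
]
# ERP's value survives: latest date among the non-null candidates.
print(survive(records, "phone"))
```

In a real MDM specification these rules are written per attribute, since different fields often trust different systems of record.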

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Role & responsibilities

Job Description: As a Data Governance Architect, you will manage organization-wide data governance activities and be responsible for improving the quality and managing the protection of sensitive data and information assets. You will prepare a Data Catalog strategy to build out a catalog of data and BI objects and onboard a user base to support the curation of metadata, lineage, and documentation, enabling seamless data discovery at an enterprise level, thereby streamlining data intake and reducing data duplication throughout the organization. You must be results-oriented, self-motivated, and able to thrive in a fast-paced environment. This role requires you to serve as a point of escalation for governance, data quality, and protection issues, and to work closely with Business and Functional area leadership to improve the quality and value of core data assets, respond to regulatory protection requirements, and support the strategic requirements of the department.

Primary Roles and Responsibilities:
• We are looking for a Data Governance expert for the development of a metadata management system solution. You should be able to streamline the curation of metadata with custom scripts that upload available metadata to the API, achieving a deeper understanding of catalog content and the user base through custom dashboards that track adoption.
• Responsible for the implementation and oversight of the Company's data management goals, standards, practices, processes, and technologies.
• Experience in establishing data connections for relevant schemas and defining data stewards' roles and responsibilities for the scope of the data catalog.
• Define roles and responsibilities related to data governance and ensure clear accountability for stewardship of the company's principal information assets.
• To properly onboard the data catalog, you should be able to conduct a data domain team assessment, discover the availability and completeness of each team's metadata, and develop a process for working with and onboarding data domain teams.
• Be the point of contact for Data Governance queries, including serving as the escalation point for client concerns.
• Coordinate the resolution of data integrity gaps by working with business owners and IT.
• Ability to work in an agile environment with an iterative approach to development.

Skills and Qualifications:
• Bachelor's and/or master's degree in computer science, or equivalent experience.
• Must have 6+ years of total IT experience and 4+ years of experience in Data Cataloging & Data Governance projects.
• Programming skills sufficient to write SQL queries to validate test results in a DW database. Proficient in SQL and Python, with a strong understanding of databases and data structures.
• Experience with Application Programming Interface (API) development: developing and running API scripts across multiple devices, databases, and servers, and working with REST and open API technologies.
• Proficient in working with Enterprise Data Catalog software such as Alation, Collibra, etc.
• Experience with dashboarding/reporting tools (Power BI, Tableau, etc.) is a plus.
• Excellent analytical, problem-solving, communication, and interpersonal skills. Ability to set priorities and multi-task in a fast-paced environment.
• Experience in metadata management (Business Glossary, lineage, data dictionaries, ETL) is essential.
• Ability to work independently and productively under pressure. Strong organizational skills and decision-making ability.
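The "custom scripts to upload available metadata to the API" described above usually amount to shaping harvested column metadata into JSON and posting it to the catalog's REST endpoint. A sketch using only the standard library; the endpoint URL, token, and payload shape are hypothetical, since real catalogs such as Alation or Collibra each expose their own APIs and authentication schemes:

```python
import json
import urllib.request

# Hypothetical catalog endpoint and token -- replace with the real
# tool's documented API (Alation, Collibra, etc. all differ).
CATALOG_URL = "https://catalog.example.com/api/v1/columns"
TOKEN = "REPLACE_ME"

def build_payload(table, columns):
    """Shape harvested column metadata into one JSON document per table."""
    return {
        "table": table,
        "columns": [
            {"name": c["name"], "type": c["type"],
             "description": c.get("description", "")}
            for c in columns
        ],
    }

def upload(payload):
    """POST one table's metadata to the catalog; returns the HTTP status."""
    req = urllib.request.Request(
        CATALOG_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_payload("dw.dim_supplier",
                        [{"name": "supplier_id", "type": "INT"},
                         {"name": "supplier_name", "type": "VARCHAR"}])
print(json.dumps(payload, indent=2))
```

Batching one document per table keeps retries cheap: a failed upload can be replayed for that table alone without re-harvesting the whole schema.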

Posted 1 month ago

Apply

4.0 - 6.0 years

12 - 20 Lacs

Pune, Bengaluru

Hybrid

Job Role & responsibilities:
• Understand operational needs by collaborating with specialized teams.
• Support key business operations. This involves architecting data flow and data lineage, and building data systems.
• Help identify and deploy enterprise data best practices such as data scoping, metadata standardization, lineage, data deduplication, mapping and transformation, and business validations.
• Own development and management of data glossaries and data owner matrices to establish enterprise data standards pertaining to the use of critical data.
• Assist with deploying a data issue capture and resolution process.
• Engage with key business stakeholders to assist with establishing fundamental data governance processes.
• Create, prepare, and standardize data quality reports for internal analysis.
• Define key data quality metrics and indicators and facilitate the development and implementation of supporting standards.

Technical skills, Experience & Qualification required:
• 6-10 years of experience in Data Governance.
• Hands-on experience with the Collibra tool.
• Proficient in data management, with an understanding of data model frameworks and practical knowledge of MDM.
• Hands-on experience working with a data catalog tool such as Collibra.
• Hands-on experience with Collibra, data governance, and data quality aspects; working with Python.
• Understanding of Cloud Services (Azure).
• Good communication and interpersonal skills.
• Bachelor's Degree in Computer Science or a related field.

Soft skills and competencies:
• Good communication skills to coordinate between business stakeholders and engineers.
• Strong results-orientation and time management.
• True team player who is comfortable working in a global team.
• Ability to establish relationships with stakeholders quickly in order to collaborate on use cases.
• Autonomy, curiosity, and innovation capability.
• Comfortable working in a multidisciplinary team within a fast-paced environment.

* Only immediate joiners will be preferred.
Outstation candidates will not be considered.
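The data quality metrics and reports mentioned above typically reduce to a few per-column measures. A minimal in-memory sketch, assuming two illustrative metrics (completeness as the non-null share per field, and uniqueness of a key field); field names are invented, and a real report would run against the warehouse rather than Python dicts:

```python
def quality_report(rows, key_field):
    """Completeness (non-null, non-empty share) per field, plus
    uniqueness of the key field. Metric names are illustrative."""
    fields = sorted({f for r in rows for f in r})
    n = len(rows)
    completeness = {
        f: sum(1 for r in rows if r.get(f) not in (None, "")) / n
        for f in fields
    }
    keys = [r.get(key_field) for r in rows if r.get(key_field) is not None]
    uniqueness = len(set(keys)) / len(keys) if keys else 0.0
    return {"rows": n, "completeness": completeness,
            "key_uniqueness": uniqueness}

rows = [
    {"supplier_id": 1, "name": "Acme",   "country": "IN"},
    {"supplier_id": 2, "name": "",       "country": "IN"},
    {"supplier_id": 2, "name": "Globex", "country": None},
]
report = quality_report(rows, "supplier_id")
print(report)  # supplier_id fully populated but duplicated -> uniqueness 2/3
```

Standardizing the report to the same metric names across domains is what lets the indicators roll up into enterprise-level dashboards.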

Posted 1 month ago

Apply

3.0 - 5.0 years

4 - 6 Lacs

Mumbai

Work from Office

Data Engineer - Data Integration (Informatica-skilled, with experience in the Data Lineage feature and a flavour of delivery management). Designs and builds solutions to move data from operational and external environments to the business intelligence environment using Informatica. Skills include designing and developing extract, transform, and load (ETL) processes. Experience includes full-lifecycle implementation of the technical components of a business intelligence solution. Team management expertise for at least a 15-20 member team, including responsibility for the team's deliverables. Experience in direct client interaction on-site. Experience in escalation management. Experience managing client engagements, ensuring delivery of solutions for complex IT, business, and client requirements. Responsible for leading a project team in delivering a solution to the client using the appropriate business measurements and terms and conditions for the project according to the project charter, project agreement, or contract. Has overall performance responsibility for managing scope, cost, schedule, and contractual deliverables, which includes applying techniques for planning, tracking, change control, and risk management. Responsible for managing all project resources, including subcontractors, and for establishing an effective communication plan with the project team and the client. Provides day-to-day direction to the project team and regular project status to the client.

Posted 1 month ago

Apply

5.0 - 8.0 years

18 - 30 Lacs

Hyderabad

Remote

Key Responsibilities
• Atlan Deployment & Connector Setup: Configure Atlan (SaaS or private cloud), set up connectors for Databricks, Hadoop, and Power BI, and schedule metadata ingestion pipelines.
• Metadata Modeling & Domain Onboarding: Work with domain owners to map schemas, define custom metadata attributes (sensitivity, owner, SLA), and create standardized ingestion playbooks for new data domains.
• Lineage Instrumentation & Data Profiling: Instrument column-level lineage via OpenLineage or native Atlan connectors; configure automated profiling jobs (row counts, null rates) to surface data quality metrics.
• Governance Policy Implementation: Translate policies (PII detection, data retention) into Atlan's rule engine, configure RBAC and SSO/LDAP integration, and implement encryption/masking for sensitive datasets.
• Monitoring & Troubleshooting: Build monitoring dashboards (CloudWatch, Grafana) to track ingestion health and API errors, diagnose pipeline failures, and coordinate fixes with Atlan Support and source-system teams.

Required Qualifications
• 3-5 years of hands-on experience implementing or supporting Atlan's platform (catalog, lineage, policy automation); Collibra or Alation experience is good to have.
• Proficiency in Python or JavaScript for API integrations, strong SQL skills, and hands-on experience with ETL/ELT frameworks.
• Familiarity with cloud platforms (AWS/GCP), containerization (Kubernetes/Docker), and scripting infrastructure as code (Terraform, CloudFormation).
• Solid understanding of metadata concepts (technical vs. business metadata, lineage, profiling) and data classification schemes (PII, PCI, PHI).
• Strong stakeholder-engagement skills: able to run onboarding sessions and create clear runbooks.
• Bachelor's in Computer Science, Data Engineering, or a related field; relevant cloud or Atlan certifications a plus.
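An automated profiling job of the kind described above (row counts, null rates) is often implemented by generating a single SQL statement per table and scheduling it against the source. A small generator sketch; the table and column names are placeholders, and how the query is executed and where results land depends on the actual stack:

```python
def profile_sql(table, columns):
    """Generate one profiling query per table: total row count plus a
    null count per column. The column list is assumed to be known from
    the harvested schema."""
    null_counts = ",\n  ".join(
        f"SUM(CASE WHEN {c} IS NULL THEN 1 ELSE 0 END) AS {c}_nulls"
        for c in columns
    )
    return f"SELECT\n  COUNT(*) AS row_count,\n  {null_counts}\nFROM {table}"

print(profile_sql("sales.orders", ["order_id", "ship_date"]))
```

One aggregate query per table keeps the profiling load to a single scan, which matters when the job runs on a schedule across hundreds of tables.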

Posted 1 month ago

Apply

5.0 - 10.0 years

35 - 40 Lacs

Bengaluru

Hybrid

Expert in the Operating Model & AJG data governance, SOPs for Collibra, Collibra Data Catalog KPIs, manual stitching of assets in Collibra, and related technical skills. Workflow design & stakeholder management; hands-on experience in Data Governance & Collibra. Required Candidate Profile: Implementation, configuration, and maintenance of the Collibra Data Governance Platform. Works with stewards, data owners, and stakeholders; understands data governance, data quality, and data integration principles.

Posted 1 month ago

Apply