
455 Metadata Management Jobs - Page 12

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Principal Data Engineer at Skillsoft, you will play a crucial role in advancing the Enterprise data infrastructure by designing and implementing the logic and structure for how data is set up, cleansed, and stored for organizational use. You will be responsible for developing a Knowledge Management strategy to support Skillsoft's analytical objectives across various business areas.

Your role will involve building robust systems and reusable code modules to solve problems, working with the latest open-source tools and platforms to build data products, and collaborating with Product Owners and cross-functional teams in an agile environment. Additionally, you will champion the standardization of processes for data elements used in analysis, establish forward-looking data and technology objectives, manage a small team through project deliveries, and design rich data visualizations and interactive tools to communicate complex ideas to stakeholders. Furthermore, you will evangelize the Enterprise Data Strategy & Execution Team mission, identify opportunities to influence decision-making with supporting data and analysis, and seek additional data resources that align with strategic objectives.

To qualify for this role, you should possess a degree in Data Engineering, Information Technology, CIS, CS, or a related field, along with 7+ years of experience in Data Engineering/Data Management. You should have expertise in building cloud data applications, cloud computing, data engineering/analysis programming languages, and SQL Server. Proficiency in data architecture and data modeling, and experience with technology stacks for Metadata Management, Data Governance, and Data Quality, are essential. Experience working cross-functionally across an enterprise organization and in an Agile methodology environment is preferred. Your strong business acumen, analytical skills, technical abilities, and problem-solving skills will be critical in this role. Experience with app and web analytics data, CRM, and ERP systems data is a plus.

Join us at Skillsoft and be part of our mission to democratize learning and help individuals unleash their edge. If you find this opportunity intriguing, we encourage you to apply and be a part of our team dedicated to leadership, learning, and success at Skillsoft. Thank you for considering this role.

Posted 2 months ago

Apply

3.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Data Governance Consultant at KPMG in Bangalore, you will play a key role in developing and implementing data governance strategies and frameworks. Your responsibilities will include leading data quality management initiatives, managing metadata, collaborating with stakeholders on change management processes, and providing guidance on best practices to internal teams.

To be successful in this role, you should have a minimum of 3 years of experience in a data governance role, with total experience ranging from 3 to 12 years. Proficiency in data governance concepts, frameworks, and data quality management principles is essential. Experience with metadata management tools and change management processes will be beneficial. Excellent communication and stakeholder management skills, and the ability to work effectively in a cross-functional team environment, are also required.

KPMG offers a competitive salary package, health insurance coverage, opportunities for professional development and growth, and a dynamic and collaborative work environment. If you have a background in data governance practices and tools and are passionate about driving data quality and implementing data governance frameworks, we invite you to join our team in Bangalore. KPMG is an Equal Opportunity Employer, and we encourage candidates with a full-time B.E/B.Tech/BCA/BBA/MCA/MBA education background to apply.

Posted 2 months ago

Apply

13.0 - 17.0 years

0 Lacs

Punjab

On-site

You will be responsible for leading and supporting enterprise master data programs to deliver Bunge's strategic initiatives covering Digital programs, Data Integration, and the S/4HANA implementation for the Finance data domain. As the Global Lead - Finance and Local Techno-Functional Lead for Core Finance Master Data, you will be accountable for developing business solutions, ensuring data quality, and successfully implementing solutions across all geographic regions and Bunge's businesses. Your role will involve driving alignment across multiple business functional areas to define and execute project scope and deliverables.

As a techno-functional expert in Master Data Management for Finance data types such as Cost Center, GL, Profit Center, and Company Code, you will collaborate with Bunge stakeholders globally from Business, IT, and other areas to define and achieve mutually agreed outcomes in the master data domains. You will work closely with Business Subject Matter Experts, Business Data Owners, business functional area leaders, IT teams, Solution Architects, Bunge Business Services leaders, Delivery Partner teams, Enterprise Architecture (IT), and other IT teams to ensure seamless execution of integrated technology solutions aligned with business needs.

Your key functions will include owning end-to-end business requirements, engaging with the business to gather requirements, defining project scope and deliverables, leading business UAT, managing the scope and deliverables of strategic projects, driving implementation of master data solutions, building relationships with internal and external service providers, guiding project teams, maintaining an in-depth understanding of processes, creating and maintaining data policies, leading Continuous Improvement initiatives, and much more.

To be successful in this role, you should have a minimum of 13-15 years of professional data management experience, including at least 8-10 years providing business solutions and working with SAP HANA and MDG/MDM. You should have strong leadership skills, experience managing project teams, and the ability to work in a virtual team across different locations and time zones. Additionally, knowledge and expertise in technologies such as SAP MDG, S/4HANA, Data Lake, Data Model, and MDM will be beneficial for this role.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Retoucher, you will be responsible for creating and retouching digital art and executing graphic designs, logos, and more. Your primary tasks will include detailed retouching, color correction, and timely delivery of image files per the instructions provided by Content Studio Lead Producers or the Project Management Lead. You will retouch all photography assets for various campaigns, including digital, social, and OOH campaigns.

Collaboration with photographers and creative teams on markups is crucial in this role to ensure the quality and accuracy of the final output. It is also essential to maintain digital asset organization by following file naming conventions, managing metadata, and adding relevant tags. Furthermore, you will oversee the delivery of final assets to the digital asset management system.

In addition to the retouching responsibilities, you may also be required to assist the in-house photographer during shoots as a digital tech when needed. Your role as a Retoucher requires a high level of attention to detail, creativity, and collaboration to deliver high-quality digital art and graphic designs.

Posted 2 months ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a senior-level professional who combines expertise in data strategy, product management, and consulting. The primary role is to guide the organization in building and scaling data-driven products, ensuring that data assets are effectively turned into valuable products or services.

In this role, you will:
- Lead complex data product initiatives, including those that are cross-functional with broad impact, and act as a key participant in large-scale planning, driving data enablement and capabilities across platforms and utilities
- Review and analyze complex, multi-faceted, larger-scale or longer-term data product initiatives that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors, to drive data enablement strategies and roadmaps while adhering to set data governance and standards
- Make decisions in complex and multi-faceted situations requiring a solid understanding of the data, analytics, and integration needs of line-of-business partners to inform prioritization, roadmap, and architecture design, influencing and leading the broader work team to meet deliverables and drive new initiatives
- Strategically collaborate and consult with peers, colleagues, and mid-level to senior managers to ensure data product solutions are built for optimal performance, design analytics applications across multiple platforms, resolve data product issues, and achieve goals; may lead projects or teams, or serve as a peer mentor
- Provide strategic input on new use case intake, prioritization, product roadmap definition, and other critical business processes
- Manage complex datasets with a continuous focus on the consumers of the data and their business needs, while adhering to set data governance
- Create and maintain data product roadmaps throughout the data product life cycle, with detailed specifications, requirements, and flows for data capabilities
- Design and maintain innovative data products, enabling data availability for data intelligence, analytics, and reporting
- Serve as a strategic liaison between data management, product teams, data engineering, and architecture teams throughout the data product life cycle

Required Qualifications:
- 5+ years of data product or data management experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Experience in large enterprise data initiatives
- Ability to analyze trends in data tooling, AI/ML products, and competitive data strategies
- Solid understanding of metadata management and data lineage
- Experience using standard BI tools (Tableau, Power BI, MicroStrategy, etc.)
- Knowledge of T-SQL, databases, data warehousing, joins, data analysis, and indexing (see the sketch after this listing)
- Ability to align product initiatives with business goals and customer needs
- BI concepts/solutions, analytical dashboards, and different types of reporting
- Cloud concepts, cloud infrastructure, and cloud-centric solutions (data pipelines, BigQuery, etc.)
- Agile principles, Agile ceremonies, Software Development Lifecycle, JIRA, Scrum/Kanban

Key Skills: SQL (Teradata, Snowflake), Python, Regression & Clustering, Alteryx, LLMs, ETL tools, Cloud (AWS/GCP/Azure)

Job Expectations:
- Deliver high-impact data products with measurable ROI
- Assess data maturity and recommend scalable solutions
- Lead the life cycle of data products from concept through launch and iteration
- Ensure consistent data definition and usage across systems
- Collaborate with data engineers and architects to optimize data flow and storage
- Drive successful change adoption in data workflows and product usage
- Identify, investigate, and resolve data quality issues through root cause analysis
- Reduce time to insight for data features
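The desired qualifications above emphasize T-SQL, joins, and window-function style analysis feeding BI-ready reporting. Purely as an illustrative sketch (not part of the posting: the server, tables, and columns are invented), running such a query from Python via pyodbc might look like this.

```python
# Illustrative only: a T-SQL query with a join and a window function,
# executed from Python via pyodbc. Server, tables, and columns are hypothetical.
import pyodbc
import pandas as pd

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=warehouse.example.com;DATABASE=sales_dw;Trusted_Connection=yes;"
)

query = """
SELECT c.region,
       o.order_month,
       SUM(o.revenue) AS monthly_revenue,
       -- window function: running total of revenue per region
       SUM(SUM(o.revenue)) OVER (
           PARTITION BY c.region ORDER BY o.order_month
       ) AS running_revenue
FROM dbo.orders AS o
JOIN dbo.customers AS c ON c.customer_id = o.customer_id
GROUP BY c.region, o.order_month;
"""

df = pd.read_sql(query, conn)   # flattened, BI-ready result set
print(df.head())
```

The grouped rows plus the running total are the kind of flattened, dashboard-ready output the role describes feeding into BI tools.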

Posted 2 months ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Monitor and support Ataccama Data Quality rules execution and profiling jobs. Troubleshoot data validation, anomaly detection, and scorecard generation issues. Perform patching and software upgrades, and ensure compliance with the latest platform updates. Work with business teams to resolve data integrity and governance-related incidents. Maintain SLA commitments for resolving incidents and ensuring data accuracy.

Skills & Qualifications: 2-5 years of experience in data quality, governance, and metadata management. Experience with the Ataccama ONE platform and knowledge of SQL for data validation.
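The posting pairs Ataccama rule monitoring with SQL for data validation. As a rough, self-contained sketch (the table, columns, and thresholds are invented, and SQLite merely stands in for a real source system), these are the kinds of ad-hoc SQL checks (null rate, duplicate keys, out-of-range values) that typically back up platform-level data quality rules.

```python
# Illustrative SQL validation checks (null rate, duplicates, range violations)
# run against a throwaway in-memory table; names and thresholds are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, email TEXT, age INTEGER);
    INSERT INTO customer VALUES (1, 'a@x.com', 34), (2, NULL, 29),
                                (2, 'b@x.com', 131), (3, 'c@x.com', 45);
""")

checks = {
    "null_email_pct": """
        SELECT 100.0 * SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END) / COUNT(*)
        FROM customer;
    """,
    "duplicate_ids": """
        SELECT COUNT(*) FROM (
            SELECT id FROM customer GROUP BY id HAVING COUNT(*) > 1
        );
    """,
    "age_out_of_range": """
        SELECT COUNT(*) FROM customer WHERE age NOT BETWEEN 0 AND 120;
    """,
}

for name, sql in checks.items():
    value = conn.execute(sql).fetchone()[0]
    print(f"{name}: {value}")
```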

Posted 2 months ago

Apply

7.0 - 12.0 years

13 - 17 Lacs

Mumbai

Work from Office

The Data Architect supports the work of ensuring that systems are designed, upgraded, managed, decommissioned, and archived in compliance with data policy across the full data life cycle. This includes complying with the data strategy, undertaking the design of data models, and supporting the management of metadata. The Data Architect's mission also includes a focus on GDPR, contributing to privacy impact assessments and the Record of Processes & Activities relating to personal data. The scope is CIB EMEA and CIB ASIA.

Direct Responsibilities:
- Engage with key business stakeholders to assist with establishing fundamental data governance processes.
- Define key data quality metrics and indicators and facilitate the development and implementation of supporting standards.
- Help to identify and deploy enterprise data best practices such as data scoping, metadata standardization, data lineage, data deduplication, mapping and transformation, and business validation.
- Structure the information in the Information System (using a data modelling tool such as Abacus), i.e. the way information is grouped, as well as the navigation methods and the terminology used within the Information Systems of the entity, as defined by the lead data architects.
- Create and manage data models (business flows of personal data with the processes involved) in all their forms, including conceptual models, functional database designs, message models, and others, in compliance with the data framework policy.
- Enable people to step logically through the Information System (including training them to use tools like Abacus).
- Contribute to and enrich the Data Architecture framework through the material collected during analysis, projects, and IT validations.
- Update all records in Abacus collected from stakeholder interviews and meetings.

Skill areas expected:
- Communicating between the technical and the non-technical: Communicates effectively across organisational, technical, and political boundaries, understanding the context. Makes complex and technical information and language simple and accessible for non-technical audiences. Can advocate for and communicate what a team does to create trust and authenticity, and can respond to challenge. Able to translate and accurately communicate across technical and non-technical stakeholders, and to facilitate discussions within a multidisciplinary team with potentially difficult dynamics.
- Data Modelling (business flows of data in Abacus): Produces data models and understands where to use different types of data models. Understands different tools and is able to compare different data models. Able to reverse-engineer a data model from a live system. Understands industry-recognised data modelling patterns and standards. Understands the concepts and principles of data modelling and is able to produce, maintain, and update relevant data models for specific business needs.
- Data Standards (rules defined to manage and maintain data): Develops and sets data standards for an organisation. Communicates the business benefit of data standards, championing and governing those standards across the organisation. Develops data standards for a specific component. Analyses where data standards have been applied or breached and undertakes an impact analysis of that breach.
- Metadata Management: Understands a variety of metadata management tools. Designs and maintains the appropriate metadata repositories to enable the organisation to understand its data assets. Works with metadata repositories to complete and maintain them, ensuring information remains accurate and up to date. The objective is to manage one's own learning and contribute to domain knowledge building.
- Turning business problems into data design: Works with business and technology stakeholders to translate business problems into data designs. Creates optimal designs through iterative processes, aligning user needs with organisational objectives and system requirements. Designs data architecture by dealing with specific business problems and aligning it to enterprise-wide standards and principles. Works within the context of a well-understood architecture and identifies appropriate patterns.

Contributing Responsibilities: The Data Architect is expected to apply knowledge and experience of the capability, including tools and techniques, and to adopt those most appropriate for the environment. The Data Architect needs knowledge of: the Functional & Application Architecture, Enterprise Architecture, and architecture rules and principles; the activities of Global Markets and/or Global Banking; market meta-models, taxonomies, and ontologies (such as FpML, CDM, ISO 20022).

Additional skill areas expected:
- Data Communication: Uses the most appropriate medium to visualise data to tell compelling and actionable stories relevant to business goals. Presents, communicates, and disseminates data appropriately and with high impact. Able to create basic visuals and presentations.
- Data Governance: Understands data governance and how it works in relation to other organisational governance structures. Participates in or delivers the assurance of a service. Understands what data governance is required and contributes to that governance.
- Data Innovation: Recognises and exploits business opportunities to ensure more efficient and effective performance of the organisation. Explores new ways of conducting business and organisational processes. Aware of opportunities for innovation with new tools and uses of data.

Technical & Behavioural Competencies:
1. Able to translate and accurately communicate across technical and non-technical stakeholders, and to facilitate discussions within a multidisciplinary team with potentially difficult dynamics.
2. Able to create basic visuals and presentations.
3. Experience working with enterprise tools (such as Abacus, Informatica, big data platforms, Collibra, etc.).
4. Experience working with BI tools (such as Power BI).
5. Good understanding of Excel (formulas and functions).

Specific Qualifications (if required): Preferred: BE/BTech, BSc-IT, BSc-Comp, MSc-IT, MSc-Comp, MCA.

Skills Referential:
- Behavioural skills: communication skills (oral and written); ability to collaborate/teamwork; ability to deliver/results-driven; creativity and innovation/problem solving.
- Transversal skills: analytical ability; ability to understand, explain, and support change; ability to develop and adapt a process; ability to anticipate business/strategic evolution.

Education Level: Bachelor's degree or equivalent. Experience Level: at least 7 years.

Other/Specific Qualifications (if required): 1. Experience with GDPR (General Data Protection Regulation) or Privacy by Design is preferred. 2. DAMA certification.

Posted 2 months ago

Apply

5.0 - 8.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a senior-level professional who combines expertise in data strategy, product management, and consulting. The primary role is to guide the organization in building and scaling data-driven products, ensuring that data assets are effectively turned into valuable products or services.

In this role, you will:
- Lead complex data product initiatives, including those that are cross-functional with broad impact, and act as a key participant in large-scale planning, driving data enablement and capabilities across platforms and utilities
- Review and analyze complex, multi-faceted, larger-scale or longer-term data product initiatives that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors, to drive data enablement strategies and roadmaps while adhering to set data governance and standards
- Make decisions in complex and multi-faceted situations requiring a solid understanding of the data, analytics, and integration needs of line-of-business partners to inform prioritization, roadmap, and architecture design, influencing and leading the broader work team to meet deliverables and drive new initiatives
- Strategically collaborate and consult with peers, colleagues, and mid-level to senior managers to ensure data product solutions are built for optimal performance, design analytics applications across multiple platforms, resolve data product issues, and achieve goals; may lead projects or teams, or serve as a peer mentor
- Provide strategic input on new use case intake, prioritization, product roadmap definition, and other critical business processes
- Manage complex datasets with a continuous focus on the consumers of the data and their business needs, while adhering to set data governance
- Create and maintain data product roadmaps throughout the data product life cycle, with detailed specifications, requirements, and flows for data capabilities
- Design and maintain innovative data products, enabling data availability for data intelligence, analytics, and reporting
- Serve as a strategic liaison between data management, product teams, data engineering, and architecture teams throughout the data product life cycle

Required Qualifications:
- 5+ years of data product or data management experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Experience in large enterprise data initiatives
- Ability to analyze trends in data tooling, AI/ML products, and competitive data strategies
- Solid understanding of metadata management and data lineage
- Experience using standard BI tools (Tableau, Power BI, MicroStrategy, etc.)
- Knowledge of T-SQL, databases, data warehousing, joins, data analysis, and indexing
- Ability to align product initiatives with business goals and customer needs
- BI concepts/solutions, analytical dashboards, and different types of reporting
- Cloud concepts, cloud infrastructure, and cloud-centric solutions (data pipelines, BigQuery, etc.)
- Agile principles, Agile ceremonies, Software Development Lifecycle, JIRA, Scrum/Kanban

Key Skills: SQL (Teradata, Snowflake), Python, Regression & Clustering, Alteryx, LLMs, ETL tools, Cloud (AWS/GCP/Azure)

Job Expectations:
- Deliver high-impact data products with measurable ROI
- Assess data maturity and recommend scalable solutions
- Lead the life cycle of data products from concept through launch and iteration
- Ensure consistent data definition and usage across systems
- Collaborate with data engineers and architects to optimize data flow and storage
- Drive successful change adoption in data workflows and product usage
- Identify, investigate, and resolve data quality issues through root cause analysis
- Reduce time to insight for data features

Posted 2 months ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad

Work from Office

A seasoned data professional with 5+ years in data governance, stewardship, and DAMA DMBoK practices. Strong SQL, metadata management, regulatory compliance (e.g., GDPR), and stakeholder engagement skills are required. CDMP certification is a plus!

Posted 2 months ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled Ab Initio Developer with over 6 years of total experience and at least 4 years of relevant experience. Your primary responsibilities will include leading the design, development, and optimization of ETL processes using Ab Initio's Graphical Development Environment (GDE). It is essential to ensure data accuracy, consistency, and availability throughout the data integration workflows.

You will be tasked with building, maintaining, and optimizing data integration workflows to facilitate seamless data flow across various systems and platforms. Your expertise in designing intricate data transformations, data cleansing, and data enrichment logic within Ab Initio graphs will be critical. Utilizing Ab Initio's metadata capabilities to document data lineage, transformations, and data definitions is essential to ensure transparency and compliance.

Monitoring and optimizing Ab Initio ETL processes for efficiency, scalability, and performance will be part of your routine, and you must address and resolve any bottlenecks that arise. Developing robust error handling and logging mechanisms to track and manage ETL job failures and exceptions is crucial to maintaining the integrity of data processes.

Collaboration with cross-functional teams, including data engineers, data analysts, data scientists, and business stakeholders, is necessary to understand requirements and ensure successful delivery of data integration projects. You will use version control systems such as Git to manage Ab Initio code and collaborate effectively with team members, and you will create and maintain comprehensive documentation of Ab Initio graphs, data integration processes, best practices, and standards for the team. You will also investigate and resolve complex ETL-related issues, provide support to team members and users, and conduct root cause analysis when problems arise.

Overall, as an Ab Initio Developer, you will be a vital part of the data engineering team, contributing to the design, development, and maintenance of data integration and ETL solutions using Ab Initio's suite of tools.

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Key skills required for the Data Modeler role:
- Data Modeling Expertise: Ability to analyze and translate business needs into long-term data models.
- Metadata Management: Strong knowledge of metadata management and related tools.
- Machine Learning Experience: 5-8+ years of experience with machine learning in production.
- Statistical Analysis: Knowledge of mathematical foundations and statistical methods.
- Database Systems: Evaluating and optimizing existing data systems.
- Data Flow Design: Creating conceptual data models and data flows.
- Coding Best Practices: Developing best practices for data coding to ensure consistency.
- System Optimization: Updating and troubleshooting data systems for efficiency.
- Collaboration Skills: Working with cross-functional teams (Product Owners, Data Scientists, Engineers, Analysts, Developers, Architects).
- Technical Documentation: Preparing training materials, SOPs, and knowledge base articles.
- Communication & Presentation: Strong interpersonal, communication, and presentation skills.
- Multi-Stakeholder Engagement: Ability to work with multiple stakeholders in a multicultural environment.
- Data Modeling Certification: Desirable but not mandatory.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Governance Engineer at Aviatrix, you will play a crucial role in establishing and maintaining our data governance framework. Your responsibilities will involve designing and implementing an efficient and scalable data governance framework to ensure data quality, security, and compliance. You will promote best practices, enhance data transparency, and foster seamless cross-functional collaboration within the organization.

One of your key responsibilities will be to develop and enforce data quality and validation processes across various data sources, including MySQL, Snowflake, and other platforms, to maintain high standards of data reliability and usability. You will also establish and maintain data cataloging practices to ensure complete data lineage across all sources.

Collaboration will be a significant aspect of your role: you will be the central point of contact for data governance, working closely with teams such as Data Engineering, Product Innovation, and Security to implement effective data governance practices. Furthermore, you will collaborate with DevOps and Security teams to ensure data security and compliance with relevant regulations like GDPR and CCPA by implementing appropriate access controls and permissions.

In addition, you will create and maintain comprehensive documentation on data governance policies, data lineage, and best practices. Proactively monitoring data pipelines and recommending tools and technologies that support data cataloging, quality monitoring, and metadata management will also be part of your role.

To excel in this position, you should have strong knowledge of SQL and experience with data warehousing platforms like Snowflake, AWS Redshift, and MySQL. Proficiency in Python for data analysis, scripting, and automation of data processes is essential. You should also possess knowledge of data quality frameworks, experience working with large-scale data in cloud environments, and familiarity with data governance tools and metadata management platforms.

Ideally, you should hold a Bachelor's degree in Information Systems, Computer Science, or a related field, along with at least 3 years of relevant experience in Data Governance, Data Engineering, or a similar role. Your total compensation package will be based on your job-related knowledge, education, certifications, and location, as per aligned ranges.

Aviatrix is a cloud networking expert dedicated to simplifying cloud networking for enterprises. If you are passionate about making a difference, growing your career, and being part of a dynamic community, we encourage you to apply. At Aviatrix, we value diversity and welcome candidates who bring unique perspectives and skills to our team. Your journey and background matter to us, and we are committed to helping you achieve your goals and unleash your full potential.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Work from Office

Description: The Data & Analytics Team is seeking a Data Engineer with a hybrid skillset in data integration and application development. This role is crucial for designing, engineering, governing, and improving our entire Data Platform, which serves customers, partners, and employees through self-service access. You'll demonstrate expertise in data & metadata management, data integration, data warehousing, data quality, machine learning, and core engineering principles.

Requirements:
• 5+ years of experience with system/data integration, development, or implementation of enterprise and/or cloud software.
• Strong experience with Web APIs (RESTful and SOAP).
• Strong experience setting up data warehousing solutions and associated pipelines, including ETL tools (preferably Informatica Cloud).
• Demonstrated proficiency with Python.
• Strong experience with data wrangling and query authoring in SQL and NoSQL environments for both structured and unstructured data.
• Experience in a cloud-based computing environment, specifically GCP.
• Expertise in documenting Business Requirement, Functional & Technical documentation.
• Expertise in writing Unit & Functional Test Cases, Test Scripts & Run Books.
• Expertise in incident management systems like Jira, ServiceNow, etc.
• Working knowledge of Agile software development methodology.
• Strong organizational and troubleshooting skills with attention to detail.
• Strong analytical ability, judgment, and problem analysis techniques.
• Excellent interpersonal skills with the ability to work effectively in a cross-functional team.

Job Responsibilities:
• Lead system/data integration, development, or implementation efforts for enterprise and/or cloud software.
• Design and implement data warehousing solutions and associated pipelines for internal and external data sources, including ETL processes.
• Perform extensive data wrangling and author complex queries in both SQL and NoSQL environments for structured and unstructured data.
• Develop and integrate applications, leveraging strong proficiency in Python and Web APIs (RESTful and SOAP); see the illustrative sketch after this listing.
• Provide operational support for the data platform and applications, including incident management.
• Create comprehensive Business Requirement, Functional, and Technical documentation.
• Develop Unit & Functional Test Cases, Test Scripts, and Run Books to ensure solution quality.
• Manage incidents effectively using systems like Jira, ServiceNow, etc.
• Prepare change management packages and implementation plans for migrations across different environments.
• Actively participate in Enterprise Risk Management processes.
• Work within an Agile software development methodology, contributing to team success.
• Collaborate effectively within cross-functional teams.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with colleagues, plus discounts at popular stores and restaurants!
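Since the requirements above combine Web API integration, Python data wrangling, and SQL loading, here is a minimal, purely illustrative sketch of that pattern; the endpoint, field names, and target table are hypothetical, and SQLite stands in for the warehouse so the example stays self-contained.

```python
# Hypothetical example: pull JSON from a REST API, wrangle it with pandas,
# and load it into a staging table (SQLite stands in for the warehouse).
import sqlite3

import pandas as pd
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

resp = requests.get(API_URL, params={"updated_since": "2024-01-01"}, timeout=30)
resp.raise_for_status()
records = resp.json()                       # assume a list of order dicts

df = pd.json_normalize(records)             # flatten nested JSON
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["order_id"])         # basic quality gate

conn = sqlite3.connect("warehouse.db")      # stand-in for the real target
df.to_sql("stg_orders", conn, if_exists="append", index=False)
print(f"Loaded {len(df)} rows into stg_orders")
```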

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad

Work from Office

Details of the role: 8 to 10 years of experience as an Informatica Admin (IICS).

Key responsibilities:
- Understand the program's service catalog and document the list of tasks that have to be performed for each.
- Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse, implementing best practices for data loading to ensure optimal performance and data quality.
- Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes, and implement data controls to ensure compliance with data standards, security policies, and regulatory requirements.
- Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements, and work on data modeling and schema design to optimize database structures for ETL processes.
- Identify and implement performance optimization strategies for ETL processes to ensure timely and efficient data loading, and troubleshoot and resolve issues related to data integration and performance bottlenecks.
- Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions, and provide guidance and mentorship to junior members of the data engineering team.
- Create and maintain comprehensive documentation for ETL processes, data models, and data flows, and ensure that documentation is kept up to date with any changes to data architecture or ETL workflows.
- Use Jira for task tracking and project management.
- Implement data quality checks and validation processes to ensure data integrity and reliability.
- Maintain detailed documentation of data engineering processes and solutions.

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Senior ETL Data Engineer, with a focus on IDMC/IICS.
- Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi).
- Expertise in IDMC principles, including data governance, data quality, and metadata management.
- Solid understanding of data warehousing concepts and practices.
- Strong SQL skills and experience working with relational databases.
- Excellent problem-solving and analytical skills.

Posted 2 months ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink (see the sketch after this listing).
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Maintain a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
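The responsibilities above center on streaming ingestion with Debezium, Kafka, and Spark. As an illustrative sketch only (the broker address, topic, simplified payload schema, and Delta paths are assumptions, not details from the posting), a minimal PySpark Structured Streaming job consuming Debezium change events from Kafka into a bronze Delta table could look roughly like this.

```python
# Minimal illustrative PySpark Structured Streaming job: consume Debezium
# change events from Kafka and append them to a bronze Delta table.
# Broker address, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-cdc-ingest").getOrCreate()

# Simplified view of a Debezium payload; real envelopes carry more fields.
payload_schema = StructType([
    StructField("op", StringType()),      # c / u / d
    StructField("after", StringType()),   # row image as a JSON string
    StructField("ts_ms", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka-broker:9092")
    .option("subscribe", "dbserver1.shop.orders")
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.select(from_json(col("value").cast("string"), payload_schema).alias("e"))
       .select("e.op", "e.after", "e.ts_ms")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_cdc")
    .outputMode("append")
    .start("/mnt/bronze/orders_cdc")
)
query.awaitTermination()
```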

Posted 2 months ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Maintain a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 months ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Maintain a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Informatica Data Quality
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and implement Informatica Data Quality solutions.
- Collaborate with cross-functional teams to analyze and address data quality issues.
- Create and maintain documentation for data quality processes.
- Participate in data quality improvement initiatives.
- Assist in training junior professionals in data quality best practices.

Professional & Technical Skills:
- Must-have: Proficiency in Informatica Data Quality.
- Strong understanding of data quality principles.
- Experience with ETL processes and data integration.
- Knowledge of data profiling and cleansing techniques.
- Familiarity with data governance and metadata management.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Title: Data Governance & Quality Specialist
Experience: 3-8 Years
Location: Bangalore (Hybrid)
Domain: Financial Services
Notice Period: Immediate to 30 Days

What You'll Do:
- Define and enforce data governance policies (BCBS 239/GDPR) across credit-risk datasets
- Design, monitor, and report on data quality KPIs; perform profiling and root-cause analysis in SAS/SQL
- Collaborate with data stewards, risk teams, and auditors to remediate data issues
- Develop governance artifacts: data lineage maps, stewardship RACI, council presentations

Must Have:
- 3-8 years in data governance or data quality roles (financial services)
- Advanced SAS for data profiling and reporting; strong SQL skills
- Hands-on experience with governance frameworks and regulatory requirements
- Excellent stakeholder management and documentation abilities

Nice to Have:
- Experience with Collibra, Informatica, or Talend
- Exposure to credit-risk model inputs (PD/LGD/EAD)
- Automation via SAS macros or Python scripting

If interested, please share your resume with sunidhi.manhas@portraypeople.com

Posted 2 months ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

Bengaluru

Work from Office

Job Title: Data Governance Specialist
Experience: 5-7 Years
Location: Bangalore, India
Domain: Financial Services
Notice Period: Immediate to 30 Days

Job Description: We require a skilled Data Governance Specialist to join the data management team in Bangalore. This role will focus on implementing and maintaining data governance frameworks, ensuring high-quality data assets, and enabling consistent use of metadata across the organization.

Key Responsibilities:
- Establish and maintain data governance policies, standards, and processes.
- Develop and manage the enterprise data glossary and metadata repositories.
- Monitor and improve data quality metrics, ensuring accuracy and consistency across systems.
- Work closely with business and technical teams to ensure data lineage and traceability.
- Support Agile delivery using tools like JIRA and Confluence.
- Collaborate across departments to promote data stewardship and governance awareness.

Key Requirements:
- 5-7 years of experience in data governance, metadata management, or data quality roles.
- Strong knowledge of data glossary, lineage, and metadata practices.
- Experience working in Agile environments; familiarity with JIRA and Confluence.
- Excellent communication and stakeholder management skills.
- Prior experience in the financial services or banking domain is preferred.

Preferred Skills:
- Knowledge of data governance tools (e.g., Collibra, Informatica, Alation) is a plus.
- Understanding of regulatory data requirements (e.g., BCBS 239, GDPR) is an advantage.

Keywords: data governance, data glossary, metadata management, data quality, Agile, JIRA, Confluence.

If interested, please share your resume with sunidhi.manhas@portraypeople.com

Posted 2 months ago

Apply

8.0 - 13.0 years

19 - 25 Lacs

Bengaluru

Work from Office

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!

You have:
- A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL (see the sketch after this listing).
- Experience with data persistence technologies like S3, HDFS, and Iceberg.
- Hands-on experience with Python and scripting languages.

It would be nice if you also had:
- Experience with data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools such as Kafka.
- A background in building data products for the telecom domain and an understanding of AI and machine learning pipeline integration.

Responsibilities:
- Data Governance: Manage source data within the Metadata Hub and Data Catalog.
- ETL Development: Develop and execute data processing graphs using Express It and the Co-Operating System.
- ETL Optimization: Debug and optimize data processing graphs using the Graphical Development Environment (GDE).
- API Integration: Leverage Ab Initio APIs for metadata and graph artifact management.
- CI/CD Implementation: Implement and maintain CI/CD pipelines for metadata and graph deployments.
- Team Leadership & Mentorship: Mentor team members and foster best practices in Ab Initio development and deployment.
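The profile above combines SQL-based warehousing and OLAP modelling on BigQuery with Python scripting. As a hedged illustration only (the dataset, table, and columns are invented, and the google-cloud-bigquery client is just one common way to run such queries), a rollup-style aggregation might look like the sketch below.

```python
# Illustrative only: an OLAP-style rollup over a hypothetical telecom events
# table in BigQuery, run via the google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
SELECT
  region,
  DATE_TRUNC(event_date, MONTH) AS month,
  COUNT(*)                      AS events,
  SUM(bytes_downlink) / 1e9     AS downlink_gb
FROM `analytics.telecom.network_events`   -- hypothetical table
GROUP BY ROLLUP (region, month)            -- subtotals plus grand total
ORDER BY region, month
"""

df = client.query(sql).to_dataframe()
print(df.head())
```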

Posted 2 months ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Mumbai

Work from Office

Subject matter experts in Marketing and Comms provide business stakeholders with specialized advice on their subjects and act as advisors leveraging specific MC expertise. This person has in-depth, unique knowledge and expertise on a specific subject or in a particular industry (e.g., digital marketing, internal comms, telecom).

Requirements:
- Familiarity with metadata management and tagging best practices.
- Exceptional attention to detail, with a strong ability to spot errors and inconsistencies in large datasets or digital assets.
- Strong analytical skills with the ability to identify data quality issues and root causes and implement corrective actions.
- Ability to work effectively with cross-functional teams, including marketing, creative, IT, and product teams, to resolve data issues and ensure alignment across the organization.
- Strong problem-solving skills to address data discrepancies, identify issues within workflows, and propose effective solutions.
- Proven track record of optimizing data management processes, improving workflows, and implementing data quality initiatives.

Primary Skills:
- 4-6 years of experience in digital asset management, with a focus on maintaining data accuracy and consistency across systems.
- 2+ years with at least one Digital Asset Management tool: Sitecore, Aprimo, AEM, or Veeva.

Secondary Skills:
- Familiarity with data validation tools, reporting platforms (e.g., Excel, Power BI), and basic SQL or query languages for managing and analyzing data.
- Excellent written and verbal communication skills, with the ability to document processes, provide training, and explain data issues clearly to both technical and non-technical stakeholders.

Posted 2 months ago

Apply

5.0 - 6.0 years

8 - 10 Lacs

Mumbai

Work from Office

We are seeking a skilled SAS Administrator with at least 5 years of experience in managing SAS platforms, including installation, configuration, and administration. The ideal candidate should have hands-on expertise in SAS Viya 3.4, SAS Viya 3.5, SAS Viya 4, SAS Management Console (SMC), and server-level configurations. Experience working in government or large enterprise environments is preferred.

Key Responsibilities:
- Perform installation, configuration, and maintenance of SAS 9.x and SAS Viya 4 on Linux/Unix server environments.
- Manage SAS Environment Manager, SAS Management Console (SMC), metadata administration, and user/group/role permissions.
- Monitor and tune system performance, ensuring platform availability and integrity.
- Administer SAS server security, backups, and patching.
- Collaborate with IT teams to troubleshoot server-level or configuration issues.
- Perform regular upgrades, migrations, and license updates.
- Coordinate with SAS Tech Support for critical issues or escalations.
- Prepare and maintain technical documentation and SOPs.

Required Skills:
- Minimum 5 years of hands-on experience in SAS Platform Administration.
- Strong experience in SAS Viya 4 administration and traditional SAS (9.x).
- Good knowledge of SAS SMC, metadata management, and server architecture.
- Experience in installation/configuration on Linux/Unix environments.
- Familiarity with security setup, resource management, and system health monitoring.
- Knowledge of shell scripting is a plus.

Preferred Qualifications:
- BE/BTech/MTech/MCA/MSc (Statistics).
- Prior experience working with government or public sector clients is a plus.
- SAS certifications (e.g., SAS Certified Platform Administrator) are a bonus.

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules (see the sketch after this listing).
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI-readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
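The responsibilities mention enforcing data quality with Great Expectations checks for null percentage, ranges, and referential rules. As a rough sketch under assumptions (the column names and thresholds are invented, and it uses the classic pandas-dataset API of Great Expectations rather than any project-specific setup), such checks might look like this.

```python
# Illustrative Great Expectations checks on a pandas DataFrame using the
# classic ge.from_pandas API. Column names and thresholds are hypothetical.
import great_expectations as ge
import pandas as pd

df = pd.DataFrame({
    "shipment_id": [101, 102, 103, None],
    "weight_kg": [12.5, 800.0, 3.2, 7.7],
    "status": ["DELIVERED", "IN_TRANSIT", "DELIVERED", "LOST"],
})

gdf = ge.from_pandas(df)

checks = {
    # Null % check: at least 95% of shipment_id values must be populated.
    "shipment_id_not_null": gdf.expect_column_values_to_not_be_null(
        "shipment_id", mostly=0.95
    ),
    # Range check: weights should fall between 0 and 500 kg.
    "weight_in_range": gdf.expect_column_values_to_be_between(
        "weight_kg", min_value=0, max_value=500
    ),
    # Referential-style check: status must come from an approved value set.
    "status_in_approved_set": gdf.expect_column_values_to_be_in_set(
        "status", ["CREATED", "IN_TRANSIT", "DELIVERED"]
    ),
}

for name, result in checks.items():
    print(name, "->", result.success)
```

In practice these validation results would feed the audit logs and QA gates described above before a dataset is handed to the BI team.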

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging (see the sketch after this listing).
- BI-readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
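The candidate profile above calls out Pandas and NumPy feature engineering with rolling averages, deltas, and time-window tagging. As a small, assumption-laden sketch (the daily volume metric, the 7-day window, and the month-end tagging rule are invented for illustration), that kind of wrangling might look like this.

```python
# Illustrative Pandas feature engineering: rolling average, deltas, and
# time-window tagging on a daily metric. Data and window sizes are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=60, freq="D"),
    "daily_volume": rng.integers(80, 200, size=60),
})

df = df.sort_values("date").set_index("date")

# Rolling average over a 7-day window.
df["volume_7d_avg"] = df["daily_volume"].rolling("7D").mean()

# Deltas vs. the previous day and vs. the same day last week.
df["delta_1d"] = df["daily_volume"].diff(1)
df["delta_7d"] = df["daily_volume"].diff(7)

# Time-window tagging: label each row by month-end proximity.
df["window_tag"] = np.where(df.index.day >= 25, "month_end", "mid_month")

print(df.tail())
```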

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies