4.0 - 7.0 years
9 - 14 Lacs
Pune
Work from Office
Title and Summary Business Analyst II (Hyperion/Essbase) The Business Analyst, MyMPA will be part of GBSC's Automation & Engineering Team, responsible for developing and implementing end-to-end workflows that generate datasets crucial to the delivery and support of ongoing and future reporting and analytics projects. This role will also work closely with the VP of Analytics & Metrics and the Director of FP&A to gather requirements for new datasets that support and strengthen the continuous evolution of reporting requirements and contribute to our reporting platform's success as it grows to support the rapidly expanding Mastercard business. Hands-on development skills, combined with the ability to analyze and understand end-user requirements, are critical success factors in this role. The role requires the skills and desire to work as an individual contributor as well as to collaborate cross-functionally with various business constituents. 1. Have you ever worked on an enterprise-wide reporting solution that relied heavily on your own knowledge and resources to build and maintain the solution? 2. Are you constantly hungry to learn? Do you have a growth mindset as opposed to a fixed mindset? 3. Do you love working with people, helping them, and turning their requirements into something that can make a difference? Role: Hyperion/Essbase (ASO/BSO) knowledge and experience is necessary. Proficiency in writing and debugging SQL queries used to extract datasets from the data warehouse and other relational databases is required. Skill in developing workflows and macros in Alteryx to handle data transformations and apply business rules is nice to have. Use Alteryx to build high-level and detailed validations to ensure data quality and uphold a high degree of user confidence in our datasets. Use MS Excel and MS PowerPoint to capture findings and present them to customers in an easy-to-understand and impactful manner. Organize and participate in discussions with customers to brainstorm on data quality issues and contribute to devising business rules that address them. Work closely with Essbase developers to align the structure of the data feeds to the dimensionality of the application. Assist the Essbase development team in testing the data loaded in the application and flag issues while performing sanity checks on the data. Partner with customers during the UAT phase and work with the development team to fix data/application issues identified during UAT. Be a Level 2 resource who is comfortable diving deep into data and process flows/breaks. Liaise with internal groups in Mastercard Operations and Technology (O&T) to ensure our solutions remain in compliance with Mastercard technical standards. Navigate O&T requirements around change management and new development. The candidate should have strong metadata management, data transformation/manipulation, and reporting skills, preferably in an FP&A environment. Prior FP&A experience is desirable: the individual will work closely with stakeholders from the FP&A organization, and an understanding of the data in an FP&A context will make engagement with stakeholders more productive. In terms of hard skills, we are looking for individuals who are proficient in Alteryx, SQL, Power BI, and advanced MS Excel, along with project management experience. All About You: Strong understanding of Windows and Linux servers. Good understanding of SQL Server or Oracle DB.
Strong commitment to development, testing, and ensuring the quality of the data workflows you will be developing. Strong ability to step in and debug/analyze the workflows of others on the team. Able to work within an Agile environment that is highly responsive to the business. Our team is part of the Finance organization; you must be comfortable working as part of the business, with a strong roll-up-your-sleeves mentality.
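To illustrate the SQL-extraction work this posting describes, here is a minimal sketch that pulls a finance dataset from a relational warehouse and writes it to CSV for a downstream Essbase load. It is a hypothetical example, not the actual setup: the DSN, schema, table, and column names are invented.

```python
# Minimal sketch: extract a dataset from a relational warehouse for a
# downstream Essbase load. DSN, table, and column names are hypothetical.
import csv
import pyodbc

SQL = """
SELECT cost_center, account, fiscal_period, SUM(amount) AS amount
FROM finance_dw.gl_actuals          -- hypothetical warehouse table
WHERE fiscal_year = ?
GROUP BY cost_center, account, fiscal_period
"""

def extract_actuals(fiscal_year: int, out_path: str) -> int:
    """Run the extract and write a CSV suitable for a downstream load."""
    conn = pyodbc.connect("DSN=FinanceDW")   # assumes an ODBC DSN exists
    try:
        cur = conn.cursor()
        cur.execute(SQL, fiscal_year)
        rows = cur.fetchall()
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([c[0] for c in cur.description])  # header row
            writer.writerows(rows)
        return len(rows)
    finally:
        conn.close()

print(extract_actuals(2024, "actuals_fy2024.csv"))
```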
Posted 3 weeks ago
9.0 - 14.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary We are looking for an experienced Data Governance Architect with deep expertise in Alation and Azure cloud platforms. This role involves partnering with senior stakeholders to define and champion an enterprise data catalog and dictionary strategy, and overseeing the entire lifecycle of the data catalog, from establishing metadata standards and initial MVPs to executing full-scale enterprise rollouts. Pre-requisites: 10+ years of experience in data governance. Proven expertise in the Alation tool on the Azure platform. Understanding of the Snowflake platform. Proven expertise in at least two of the following areas: Data Governance Operating Models, Metadata Management, Data Cataloging, Data Lineage, or Data Quality. Deep understanding of governance frameworks such as DAMA or DCAM, with practical implementation experience. Strong capability in conducting maturity assessments, gap analyses, and delivering actionable strategic roadmaps. Excellent communication skills, with the ability to articulate complex topics clearly and deliver precise documentation. Key Responsibilities: Assess current cataloging and dictionary capabilities, identify gaps, and develop actionable roadmaps to enrich metadata quality, accelerate catalog population, and drive adoption. Identify the different data personas using the data catalog and design persona-specific playbooks to promote adoption. Design, deploy, and manage scalable data catalog and dictionary solutions using platforms like Alation. Understanding of leading data governance tools such as Collibra and Purview. Oversee the entire lifecycle of the data catalog, from establishing metadata standards and initial MVPs to executing full-scale enterprise rollouts. Define architecture and best practices for metadata management to ensure consistency, scalability, and sustainability of the catalog and dictionary. Identify and catalog critical data elements by capturing clear business terms, glossaries, KPIs, lineage, and persona-specific guides to build a trusted, comprehensive data dictionary. Develop and enforce policies to maintain metadata quality, manage access, and protect sensitive information within the catalog. Implement robust processes for catalog population, including automated metadata ingestion leveraging APIs, glossary management, lineage tracking, and data classification. Develop a workflow management approach that notifies stewards of changes to certified catalog content. Create reusable frameworks and templates for data definitions and best practices to streamline catalog adoption across teams.
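As a sketch of the automated metadata ingestion this role calls for, the snippet below pushes a business-glossary term into a catalog over REST. Alation does expose REST APIs, but the endpoint path, auth header, and payload shape here are illustrative assumptions rather than the documented contract; the instance URL and token are placeholders.

```python
# Illustrative sketch of automated glossary/metadata ingestion via REST.
# Endpoint path, auth header, and payload are assumptions for illustration;
# consult the Alation API documentation for the real contract.
import requests

CATALOG_URL = "https://alation.example.com"      # hypothetical instance
TOKEN = "..."                                    # API token (placeholder)

def upsert_glossary_term(name: str, description: str, steward: str) -> None:
    """Push one business-glossary term into the catalog."""
    resp = requests.post(
        f"{CATALOG_URL}/integration/v2/term/",    # hypothetical endpoint
        headers={"TOKEN": TOKEN},
        json={"title": name, "description": description, "steward": steward},
        timeout=30,
    )
    resp.raise_for_status()

upsert_glossary_term(
    "Active Customer",
    "A customer with at least one transaction in the trailing 90 days.",
    "data-governance@example.com",
)
```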
Posted 3 weeks ago
8.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Role Purpose The purpose of the role is to define and develop the Enterprise Data Structure, including the Data Warehouse, Master Data, Integration, and transaction processing, while maintaining and strengthening modelling standards and business information. Do 1. Define and develop a Data Architecture that aids the organization and clients in new and existing deals: a. Partner with business leadership (adopting the rationalization of the data value chain) to provide strategic, information-based recommendations to maximize the value of data and information assets, and protect the organization from disruptions while also embracing innovation b. Assess the benefits and risks of data by using tools such as business capability models to create a data-centric view that quickly visualizes what data matters most to the organization, based on the defined business strategy c. Create the data strategy and roadmaps for the Reference Data Architecture as required by clients d. Engage all stakeholders to implement data governance models and ensure that implementation is done based on every change request e. Ensure that data storage and database technologies are supported by the enterprise's data management and infrastructure f. Develop, communicate, support, and monitor compliance with data modelling standards g. Oversee and monitor all frameworks to manage data across the organization h. Provide insights on database storage and platforms for ease of use and least manual work i. Collaborate with vendors to ensure integrity, objectives, and system configuration j. Collaborate with functional and technical teams and clients to understand the implications of data architecture and maximize the value of information across the organization k. Present the data repository, objects, and source systems along with data scenarios for front-end and back-end usage l. Define high-level data migration plans to transition data from source to target system/application, addressing the gaps between the current and future state, typically in sync with IT budgeting or other capital planning processes m. Maintain knowledge of all data service provider platforms and ensure an end-to-end view n. Oversee all data standards/reference papers for proper governance o. Promote, guard, and guide the organization towards common semantics and the proper use of metadata p. Collect, aggregate, match, consolidate, quality-assure, persist, and distribute such data throughout the organization to ensure a common understanding, consistency, accuracy, and control q. Provide solutions to RFPs received from clients and ensure overall implementation assurance: i. Develop a direction to manage the portfolio of all databases, including systems and shared infrastructure services, to better match business outcome objectives ii. Analyse the technology environment, enterprise specifics, and client requirements to set a collaboration solution for big/small data iii. Provide technical leadership for the implementation of custom solutions through thoughtful use of modern technology iv. Define and understand current issues and problems and identify improvements v. Evaluate and recommend solutions that integrate with the overall technology ecosystem, keeping consistency throughout vi. Understand the root-cause problems in integrating business and product units vii. Validate the solution/prototype from a technology, cost-structure, and customer-differentiation point of view viii. Collaborate with sales and delivery leadership teams to identify future needs and requirements ix. Track industry and application trends and relate these to planning current and future IT needs 2. Build the enterprise technology environment for data architecture management: a. Develop, maintain, and implement standard patterns for data layers, data stores, data hub & lake, and data management processes b. Evaluate all implemented systems to determine their viability in terms of cost effectiveness c. Collect all structural and non-structural data from different places and integrate it into one database form d. Work through every stage of data processing: analysing, creating physical data model designs, solutions, and reports e. Build the enterprise conceptual and logical data models for analytics, operational, and data mart structures in accordance with industry best practices f. Implement the best security practices across all databases based on accessibility and technology g. Demonstrate a strong understanding of activities within the primary discipline, such as Master Data Management (MDM), Metadata Management, and Data Governance (DG) h. Demonstrate strong experience in conceptual, logical, and physical database architectures, design patterns, best practices, and programming techniques around relational data modelling and data integration 3. Enable delivery teams by providing optimal delivery solutions/frameworks: a. Build and maintain relationships with delivery and practice leadership teams and other key stakeholders to become a trusted advisor b. Define database physical structure, functional capabilities, security, back-up, and recovery specifications c. Develop and establish relevant technical, business process, and overall support metrics (KPI/SLA) to drive results d. Monitor system capabilities and performance by performing tests and configurations e. Integrate new solutions and troubleshoot previously encountered errors f. Manage multiple projects and accurately report the status of all major assignments while adhering to all project management standards g. Identify technical, process, and structural risks and prepare a risk mitigation plan for all projects h. Ensure quality assurance of all architecture and design decisions and provide technical mitigation support to the delivery teams i. Recommend tools for reuse and automation for improved productivity and reduced cycle times j. Help the support and integration teams achieve better efficiency and client experience, including ease of use through AI methods k. Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams l. Ensure architecture principles and standards are consistently applied to all projects m. Ensure optimal client engagement: i. Support the pre-sales team in presenting the entire solution design and its principles to the client ii. Negotiate, manage, and coordinate with client teams to ensure all requirements are met iii. Demonstrate thought leadership and strong technical capability in front of the client to win confidence and act as a trusted advisor Mandatory Skills: Data Governance. Experience: 8-10 Years.
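To make the conceptual-to-physical modelling responsibilities above concrete, here is a minimal, runnable star-schema sketch (one fact table, two dimensions) built in SQLite purely for illustration; the table and column names are invented.

```python
# Minimal, runnable sketch of dimensional modelling: a one-fact,
# two-dimension star schema built in SQLite for illustration only.
import sqlite3

DDL = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240131
    calendar_day TEXT NOT NULL,
    fiscal_month TEXT NOT NULL
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT NOT NULL
);
CREATE TABLE fact_sales (
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product(product_key),
    units        INTEGER NOT NULL,
    revenue      REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print("star schema created:",
      [r[0] for r in conn.execute(
          "SELECT name FROM sqlite_master WHERE type='table'")])
```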
Posted 3 weeks ago
5.0 - 7.0 years
5 - 8 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Collibra Expert - Data Governance (Onsite). Locations: Hyderabad, Indore, Ahmedabad (India). Position Type: Full-time / Onsite. Immediate Requirement. Role: We are seeking a highly skilled Collibra Expert to lead enterprise-level data governance initiatives. The ideal candidate must have strong hands-on Collibra expertise, including configuration, workflow development, integration, and stakeholder engagement, with proven experience in implementing governance frameworks across large organizations. Key Responsibilities: Lead end-to-end implementation & administration of the Collibra Data Intelligence Platform. Design & configure Collibra Operating Models (domains, assets, workflows, roles). Develop & maintain custom workflows using BPMN & Collibra Workflow Designer. Integrate Collibra with Snowflake, Informatica, Tableau, Azure, and SAP via APIs & connectors. Define & enforce data governance policies with stewards, owners & business teams. Implement & monitor data quality, lineage & metadata management. Act as Collibra SME & evangelist, driving data governance maturity. Provide training & support to technical and business users. Maintain documentation & ensure compliance with governance standards. Required Skills: 10+ years in data governance, metadata management, or data quality. 5+ years of hands-on Collibra experience (configuration, workflows, integrations). Proficiency with Collibra APIs, BPMN, Groovy, and JavaScript. Experience with data cataloging, lineage & business glossaries in Collibra. Familiarity with Snowflake, Azure, AWS, Informatica, or similar platforms. Strong communication & stakeholder management skills. Preferred Skills: Collibra Ranger / Solution Architect certification. Enterprise-level Collibra deployment experience. Knowledge of regulatory compliance (GDPR, HIPAA, CCPA). Background in data architecture / data engineering. Soft Skills: Strong leadership & stakeholder collaboration. Excellent problem-solving & analytical mindset. Ability to mentor teams & evangelize data governance practices. Immediate Requirement: Candidates must be available for onsite work at Hyderabad, Indore, or Ahmedabad with immediate availability. Resume Submission: Please share resumes with full details, including Current CTC, Expected CTC, Notice Period / Immediate Availability, Current Location, and Preferred Job Location. Send profiles to: navaneetha@suzva.com
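As an illustration of the Collibra API integration work listed above, the sketch below registers an asset over REST. The /rest/2.0/assets path and payload fields follow Collibra's usual REST conventions but should be treated as assumptions to verify against the product documentation; the instance URL and credentials are placeholders.

```python
# Illustrative sketch: register an asset through a Collibra-style REST API.
# Endpoint path and payload fields are assumptions for illustration only;
# instance URL and credentials are placeholders.
import requests

BASE = "https://collibra.example.com/rest/2.0"   # hypothetical instance
AUTH = ("svc_governance", "...")                 # placeholder credentials

def create_asset(name: str, domain_id: str, type_id: str) -> dict:
    """Create one governance asset in the given domain."""
    resp = requests.post(
        f"{BASE}/assets",                        # assumed endpoint
        auth=AUTH,
        json={"name": name, "domainId": domain_id, "typeId": type_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

print(create_asset("Customer Churn Rate", "<domain-uuid>", "<type-uuid>"))
```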
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
About The Role Project Role: Application Lead. Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must-have skills: Informatica MDM. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the decision-making process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for the immediate team and across multiple teams. - Facilitate knowledge-sharing sessions to enhance team capabilities. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: - Must-have skills: Proficiency in Informatica MDM. - Strong understanding of data integration and data quality processes. - Experience with data modeling and metadata management. - Familiarity with ETL processes and data warehousing concepts. - Ability to troubleshoot and resolve technical issues efficiently. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Informatica MDM. - This position is based at our Hyderabad office. - 15 years of full-time education is required. Qualification: 15 years of full-time education.
Posted 3 weeks ago
15.0 - 20.0 years
15 - 19 Lacs
Hyderabad
Work from Office
About The Role Project Role: Packaged/SaaS Application Architect. Project Role Description: Design scalable, secure, and cost-efficient SaaS solutions to align with enterprise architecture. Apply SaaS principles such as multi-tenancy and modularity, define customization limits, and guide platform use to ensure performance and maintainability. Must-have skills: Microsoft Fabric. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: As a Packaged/SaaS Application Architect, you will be responsible for designing scalable, secure, and cost-efficient Software-as-a-Service solutions that align with the overarching enterprise architecture. Your typical day will involve applying SaaS principles such as multi-tenancy and modularity, defining customization limits, and guiding platform use to ensure optimal performance and maintainability. You will collaborate with various teams to ensure that the solutions meet both technical and business requirements, while also addressing any challenges that arise during the development process. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for the immediate team and across multiple teams. - Facilitate knowledge-sharing sessions to enhance team capabilities. - Monitor and evaluate the performance of the SaaS solutions to ensure they meet established standards. Professional & Technical Skills: - Data Architecture and Modelling: Designing and implementing scalable MDM architectures (Profisee), including data layer design, metadata management, data storage and lifecycle, analytics and ML integrations, data security, and semantic models within MS Fabric. - Lakehouse Architecture: Implementing modern data lakehouse architectures, potentially utilizing Delta Lake for data versioning, ACID transactions, and schema enforcement. - Microsoft Fabric: Deep understanding of Fabric's core components, such as OneLake, Synapse Data Engineering, Synapse Data Warehousing, semantic models, and Power BI. - Data Governance: Experience implementing data quality frameworks with Purview for data governance, including data cataloguing, data loss prevention, and auditing. - Programming Languages: Strong skills in SQL and Python for data manipulation, analysis, and optimization. - Data Integration: Experience with various data integration patterns and tools (ETL) using Fabric data pipelines / Dataflow Gen2, and monitoring tools for performance. - Documentation: Document all design standards and processes. - Communication and Collaboration: Excellent communication and collaboration skills to work effectively with stakeholders and developers. Qualification: 15 years of full-time education.
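The lakehouse skills above can be illustrated with a short PySpark sketch that appends rows to a Delta table with schema enforcement, as one would from a Fabric notebook. The lakehouse table name is invented, and on Fabric the Spark session is supplied by the notebook runtime.

```python
# Minimal sketch: append rows to a Delta table with schema enforcement,
# as one might from a Fabric notebook. The lakehouse table name is an
# invented example; Fabric notebooks supply the Spark session.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "2024-01-31", 120.0), (2, "2024-02-01", 75.5)],
    "order_id INT, order_date STRING, amount DOUBLE",
).withColumn("order_date", F.to_date("order_date"))

(orders.write
    .format("delta")                     # Delta: ACID transactions, versioning
    .mode("append")
    .option("mergeSchema", "false")      # reject writes that drift the schema
    .saveAsTable("sales_lakehouse.orders"))  # hypothetical lakehouse table
```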
Posted 3 weeks ago
15.0 - 25.0 years
14 - 18 Lacs
Pune
Work from Office
Position Overview We are seeking an expert Senior Data & AI Architect to design and implement SaaS capabilities (independent and modular services) addressing the needs of various products, which can be desktop applications, single-tenant applications, or multi-tenant applications. The ideal candidate will have extensive experience architecting scalable, resilient, and secure Data & AI solutions and working with cloud-native technologies in high-volume environments. Key Responsibilities: Design and architect integration patterns for a multi-tenant data and AI platform with IAM and API Gateway solutions. Build scalable, resilient data architectures leveraging AWS cloud services (AWS cloud is mandatory; other hyperscalers such as Azure and Google Cloud will be an added advantage). Develop technical architecture specifications for Python- and Java-based microservices. Break a complex architecture into smaller chunks that can be delivered iteratively while keeping the focus on the long-term vision. Design data lake, data warehouse, and lakehouse solutions with optimized query capabilities. Establish ETL/ELT pipelines for efficient data processing. Build and implement metadata management strategies. Develop reference architectures and design patterns for complex systems (e.g., timeseries data processing). Build rapid proof-of-concept solutions to validate architectural decisions. Collaborate with multi-functional teams to ensure seamless integration. Guide and mentor development teams on architectural standard methodologies and technology adoption. Conduct architectural reviews, technical design discussions, and code/design reviews. Required Skills & Experience: 15-25 years of technology experience with at least 8+ years in architect roles. Proven experience designing high-scale, resilient, and secure data platforms and big data technologies handling massive volumes of data and API calls. Deep expertise with AWS cloud services, particularly in data processing (S3, Athena, Glue, EMR) and AI services (SageMaker, Bedrock, etc.). Lead the design, development, and governance of our technology architecture. Strong background in Java microservice architecture patterns and standard processes. Experience integrating with enterprise IAM solutions and API gateways. Hands-on knowledge of data lake, data warehouse, or lakehouse design, implementation, and optimization. Experience with ETL/ELT tools and methodologies. Proficiency in crafting metadata management systems. Strong background in building complex systems (e.g., timeseries data processing and storage). Ability to rapidly prototype solutions and validate architectural concepts. Preferred Qualifications: Python and Java programming experience; Go will be an added advantage. AWS certification (Associate/Professional Architect). Knowledge of Retrieval-Augmented Generation (RAG) and agentic AI systems. Experience with GenAI technologies and integration patterns. Familiarity with MLOps practices. Experience with real-time data processing frameworks. Cloud architecture certifications (AWS Solutions Architect or similar). Familiarity with security best practices, such as zero-trust architecture and secure API design, TOGAF, etc. Education: Master's degree in Computer Science, Software Engineering, Big Data, AI/ML, Data Engineering, or a related field.
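As a sketch of the AWS data-processing stack this posting names, the snippet below submits an Athena query over S3-resident data with boto3 and polls for completion. The database, table, and results bucket are hypothetical.

```python
# Sketch: submit an Athena query over S3 data and wait for it to finish.
# Database, table, and results-bucket names are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str) -> str:
    """Start an Athena query, block until it finishes, return its state."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "timeseries_db"},           # hypothetical
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

print(run_query("SELECT sensor_id, avg(value) FROM readings GROUP BY sensor_id"))
```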
Posted 3 weeks ago
10.0 - 15.0 years
7 - 11 Lacs
Bengaluru
Work from Office
SymphonyAI is a global leader in AI-driven enterprise applications, transforming industries with cutting-edge artificial intelligence and machine learning solutions. We empower organizations across retail, CPG, financial services, manufacturing, media, enterprise IT, and the public sector by delivering data-driven insights that drive business value. Headquartered in Palo Alto, California, SymphonyAI has a wide range of products and a strong global presence, with operations in North America, Southeast Asia, the Middle East, and India. The company is dedicated to fostering a high-performance culture and maintaining its position as one of the largest and fastest-growing AI portfolios in the industry. Job Description About the Role We are looking for a Data Warehouse Engineer with strong expertise across the Azure Data Platform to design, build, and maintain modern data warehouse and analytics solutions. This role requires hands-on experience with Azure Synapse Analytics, Data Factory, Data Lake, Azure Analysis Services, and Power BI. The ideal candidate will ensure seamless data ingestion, storage, transformation, analysis, and visualization, enabling the business to make data-driven decisions. Key Responsibilities Data Ingestion & Orchestration: 10-15 years of experience in designing and building scalable ingestion pipelines using Azure Data Factory. Integrate data from multiple sources (SQL Server, relational databases, Azure SQL DB, Cosmos DB, Table Storage). Manage batch and real-time ingestion into Azure Data Lake Storage. Data Storage & Modelling: Develop and optimize data warehouse solutions in Azure Synapse Analytics. Implement robust ETL/ELT processes to ensure data quality and consistency. Create data models for analytical and reporting needs. Data Analysis & Security: Build semantic data models using Azure Analysis Services for enterprise reporting. Collaborate with BI teams to deliver well-structured datasets for reporting in Power BI. Implement Azure Active Directory for authentication, access control, and security best practices. Visualization & Business Support: Support business teams in building insightful Power BI dashboards and reports. Translate business requirements into scalable and optimized BI solutions. Provide data-driven insights in a clear, business-friendly manner. Optimization & Governance: Monitor system performance and optimize pipelines for efficiency and cost control. Establish standards for data governance, data quality, and metadata management. Qualifications & Skills Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Proven experience as a Data Warehouse Engineer / Data Engineer with strong expertise in: Azure Synapse Analytics, Azure Data Factory, Azure Data Lake Storage, Azure Analysis Services, Azure SQL Database / SQL Server, and Power BI (reporting & dashboarding). Strong proficiency in SQL and data modelling (star schema, snowflake schema, dimensional modelling). Knowledge of Azure Active Directory for authentication & role-based access control. Excellent problem-solving skills and the ability to optimize large-scale data solutions. Strong communication skills to collaborate effectively with both technical and business stakeholders.
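To illustrate the Azure Data Factory orchestration described above, here is a short sketch that triggers a pipeline run with the Azure SDK for Python. The subscription, resource group, factory, and pipeline names are placeholders, and the call pattern should be checked against the current azure-mgmt-datafactory documentation.

```python
# Sketch: trigger an Azure Data Factory pipeline run with the Azure SDK.
# Subscription, resource group, factory, and pipeline names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),   # picks up env/CLI/managed identity
    subscription_id="<subscription-id>",   # placeholder
)

run = client.pipelines.create_run(
    resource_group_name="rg-analytics",    # hypothetical
    factory_name="adf-enterprise-dw",      # hypothetical
    pipeline_name="pl_ingest_sales",       # hypothetical
)
print("started pipeline run:", run.run_id)
```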
Posted 3 weeks ago
8.0 - 13.0 years
14 - 18 Lacs
Hyderabad
Work from Office
We are seeking a strategic and technically strong Enterprise Data Architect to design and lead the implementation of scalable, secure, and high-performing data architecture solutions across the organization. The ideal candidate will have deep experience with modern data platforms, including Snowflake, DBT, SnapLogic, and cloud-native technologies. This role requires a balance of technical expertise, architectural vision, and business acumen to align data solutions with enterprise goals. Key Responsibilities: Define and maintain the organization's enterprise data architecture strategy, including data modeling, governance, and integration standards. Lead the design and architecture of enterprise-grade data platforms using Snowflake, DBT, SnapLogic, and Azure Data Factory. Oversee the development of robust, scalable, and secure data pipelines across a hybrid cloud environment. Architect and optimize SQL Server and PostgreSQL environments to ensure availability, performance, and scalability. Define and enforce integration patterns to ensure data consistency, accuracy, and reliability across systems. Guide the design of efficient ETL/ELT frameworks to ensure alignment with data warehousing and business intelligence requirements. Partner with business and technical teams, including data engineers, analysts, and stakeholders, to define and enforce data governance and metadata management practices. Review and guide SQL query performance tuning, indexing strategies, and system monitoring. Provide direction on the use of Python for data automation, orchestration, and advanced transformations. Establish and maintain enterprise-wide documentation for data flows, data dictionaries, and architectural decisions. Technical Skills & Experience: 8+ years of progressive experience in data engineering or architecture roles, with 2-3 years in a lead or architect capacity. Proven experience designing and implementing data architectures using Snowflake, DBT, SnapLogic, and Azure Data Factory. Strong proficiency in SQL and performance tuning across large-scale environments. Deep experience with SQL Server and PostgreSQL administration and architecture. Experience with Python for scripting, data processing, and orchestration tasks. Solid understanding of data governance, security, compliance, and data lifecycle management. Experience leading data modernization initiatives in cloud/hybrid environments. Understanding of metadata management, master data management, and data lineage tools is a plus. Soft Skills: Strategic mindset with excellent analytical and problem-solving skills. Strong leadership and communication abilities, capable of influencing stakeholders across business and technical domains. Ability to translate business requirements into scalable and sustainable technical solutions. Team-oriented with a collaborative approach to cross-functional projects. Preferred Qualifications: Bachelor's or master's degree in Computer Science, Data Engineering, or a related field. Relevant certifications (e.g., Snowflake Architect, Azure Solutions Architect, DBT Certification) are highly desirable.
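As an example of the SQL performance-tuning reviews this role leads, the sketch below runs EXPLAIN on a query through the Snowflake Python connector; inspecting the plan is a typical first step before pruning or clustering decisions. The account, credentials, warehouse, and table names are placeholders.

```python
# Sketch: inspect a query plan via the Snowflake Python connector.
# Account, credentials, warehouse, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account-identifier>",   # placeholder
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",         # hypothetical warehouse
)
try:
    cur = conn.cursor()
    # EXPLAIN shows the plan; a first step before pruning/clustering work.
    cur.execute(
        "EXPLAIN SELECT * FROM sales.orders WHERE order_date >= '2024-01-01'"
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```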
Posted 3 weeks ago
12.0 - 20.0 years
35 - 60 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you passionate about delivering exceptional service and revolutionizing the world of technology? We have an incredible opportunity for a talented individual to join our dynamic team as a Data Architect. In this customer-centric role, you will play a pivotal role in ensuring our customers receive top-notch service within a contractual framework. As a visionary leader, you will inspire and guide our team of experts to deliver high-quality and reliable information technology services. Working closely with the latest systems, software products, and networked devices, you will align our solutions perfectly with our customers' evolving business needs. Your deep knowledge of the services we provide, paired with your understanding of customer businesses, will enable you to propose and implement tailored solutions that exceed their expectations. You will be an integrated part of our customer account structure, fostering strong relationships with our customers and collaborating closely with our Delivery Partner. Together, you will create an environment that promotes innovation, collaboration, and customer success. By owning the technical and managerial support for our field engineers, technicians, system administrators, subject matter experts, and product support personnel, you will empower them to deliver, manage, maintain, and deploy IT services effectively. When it comes to troubleshooting incidents, problems, changes, and escalations, you will be at the forefront, providing swift support to fix any issues that may arise in malfunctioning services, operations, software, or equipment. Your expertise will be crucial in ensuring that our systems run smoothly, offering our customers a seamless experience. As a Data Architect, you will have the unique opportunity to collaborate with internal stakeholders and clients. Together, you will co-create, design, deploy, and maintain reliable, available, and future-proof systems and services. Your innovative ideas and leadership skills will play a vital role in shaping the technological landscape of our organization and the industry as a whole. If you are ready to make an impact, drive customer success, and be at the forefront of technological advancements, this is the role for you. Join our team and be part of an exhilarating journey as we reshape the IT services landscape with creativity, passion, and excellence. Your Future at Kyndryl Kyndryl has a global footprint, which means that as a Data Architect at Kyndryl you will have opportunities to work on projects and collaborate with colleagues from around the world. This role is dynamic and influential – offering a wide range of professional and personal growth opportunities that you won't find anywhere else. Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
We are seeking an experienced Cloud Data Architect with strong expertise in Databricks to lead and deliver end-to-end data architecture solutions across cloud-based delivery projects. The ideal candidate will have deep experience with cloud data ecosystems (Azure, AWS, or GCP), data engineering, and hands-on delivery using Databricks for data transformation, advanced analytics, and AI/ML use cases. Key Responsibilities: Own the architecture and design of modern data platforms using Databricks on cloud (Azure/AWS/GCP). Develop scalable, high-performance data lakehouse architectures, incorporating Delta Lake and structured streaming. Architect robust data pipelines (ETL/ELT) using PySpark, SparkSQL, and Databricks notebooks. Work closely with business stakeholders, project managers, and engineering teams to understand data requirements and translate them into technical architecture. Optimize data processing workflows for performance, scalability, and cost-efficiency. Implement best practices around data governance, security, lineage, and compliance in cloud-native environments. Enable analytics, BI, and ML initiatives by ensuring strong data foundations. Provide technical leadership and mentor junior engineers and data modelers. Drive innovation and continuously evaluate emerging technologies in the data and AI space. Required Skills & Experience: 10+ years of experience in data architecture, engineering, or analytics roles. Strong hands-on experience with Databricks, including Delta Lake, Spark, and MLflow. Expertise in cloud platforms (preferably Azure, but AWS/GCP also considered) and native cloud data services like Azure Data Lake Storage, Synapse, AWS S3, Redshift, or GCP BigQuery. Strong command of SQL, PySpark, SparkSQL, and distributed data processing. Solid experience in data modeling (dimensional and normalized), metadata management, and data quality frameworks. Strong understanding of DevOps/DataOps, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform). Experience implementing data security, RBAC, and encryption in cloud environments. Excellent stakeholder communication, leadership, and documentation skills. Preferred Skills and Experience: Certification in Databricks (e.g., Databricks Certified Data Engineer Professional or Lakehouse Fundamentals). Cloud certifications (Azure Solutions/Data Engineer, AWS Data Analytics, GCP Data Engineer). Experience with machine learning workflows, feature stores, or real-time analytics. Familiarity with data cataloging tools like Unity Catalog, Purview, or Collibra. Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value.
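To make the Delta Lake and Structured Streaming expertise above concrete, here is a minimal sketch that streams rows from a bronze Delta table into a de-duplicated silver table. The table names, columns, and checkpoint path are examples; on Databricks the Spark session is provided by the runtime.

```python
# Minimal sketch: promote rows from a bronze Delta table to a de-duplicated
# silver table with Structured Streaming. Table names, columns, and the
# checkpoint path are examples; Databricks supplies the Spark session.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.readStream.table("bronze.events")        # hypothetical source

silver = (bronze
    .withColumn("event_date", F.to_date("event_ts"))    # assumes an event_ts column
    .dropDuplicates(["event_id"]))                      # de-dup during promotion

(silver.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/silver_events")  # example path
    .outputMode("append")
    .toTable("silver.events"))                          # hypothetical target
```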
Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Punjab
On-site
As a member of Americana Restaurants International PLC's Center of Excellence in Mohali, India, you will be part of a pioneering force in the MENA region and Kazakhstan's Out-of-Home Dining industry. With over 60 years of experience, our organization is a leader in operating Quick Service Restaurants (QSR) and casual dining establishments with a vast portfolio of iconic global brands. Our network of 2,600+ restaurants across 12 countries in the Middle East, North Africa, and Kazakhstan is driven by a team of over 40,000 talented individuals dedicated to delivering exceptional food, superior service, and memorable experiences. The Center of Excellence in Mohali is crucial for product development, IT, Digital, AI, and Analytics, as well as implementing global IT best practices to drive digital transformation. In this role, you will be responsible for designing, implementing, and supporting Oracle EPM Cloud solutions for planning, budgeting, forecasting, and consolidation. Your contributions will directly impact financial visibility, accuracy, and decision-making processes across our multi-country operations. Key Responsibilities: - Configure and deploy EPBCS modules such as Workforce, Financials, Capex, Projects, and Custom - Implement and maintain FCCS for close, consolidation, and reporting purposes - Develop business rules, forms, dashboards, and Smart View templates - Manage data integration using Data Management / EPM Integration Agent - Collaborate with finance teams to support budgeting and closing cycles - Conduct performance tuning, troubleshooting, and user support - Ensure metadata, security, and workflow alignment with best practices - Explore automation possibilities with EPM Automate and REST APIs What You Bring: - 5-7 years of hands-on experience with Oracle EPM Cloud - At least 2 years of experience in EPBCS and FCCS - Deep understanding of FP&A, consolidation, intercompany eliminations, and currency translations - Proficiency in Calculation Manager / Business Rules, Smart View / Web Studio Reporting, Data Management / Integration Agent - Familiarity with Oracle EPM Automate, REST APIs, and metadata management - Excellent collaboration and communication skills Education & Qualifications: - Bachelor's degree in Accounting, Finance, Computer Science, or a related field - Oracle EPM certifications (EPBCS and/or FCCS) preferred - Knowledge of additional modules like ARCS, TRCS, Narrative Reporting is a plus.
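As a sketch of the EPM Automate scripting this posting mentions, the snippet below wraps the CLI in Python for a nightly snapshot export. The command names follow the EPM Automate utility's documented style, but treat the exact commands and arguments as assumptions to verify against the Oracle reference; credentials and the instance URL are placeholders.

```python
# Sketch: wrap the EPM Automate CLI in Python for a nightly snapshot export.
# Command names and arguments are assumptions to verify against the Oracle
# EPM Automate reference; credentials and the URL are placeholders.
import subprocess

EPM = "epmautomate.sh"                        # epmautomate.bat on Windows
URL = "https://epm.example.oraclecloud.com"   # hypothetical instance

def run(*args: str) -> None:
    """Invoke one EPM Automate command and fail loudly on error."""
    subprocess.run([EPM, *args], check=True)

run("login", "service_admin", "<password>", URL)   # placeholder credentials
try:
    run("exportsnapshot", "NightlyBackup")         # assumed command name
finally:
    run("logout")
```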
Posted 4 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Senior Associate in the Data, Analytics & Specialist Managed Service Tower at PwC, with 6 to 10 years of experience, you will be part of a team of problem solvers addressing complex business issues from strategy to execution. Your responsibilities at this management level include utilizing feedback and reflection for personal development, being flexible in stretch opportunities, and demonstrating critical thinking skills to solve unstructured problems. You will also be involved in ticket quality reviews, project status reporting, and ensuring adherence to SLAs and incident, change, and problem management processes. Seeking diverse opportunities, communicating effectively, upholding ethical standards, demonstrating leadership capabilities, and collaborating in a team environment are essential aspects of this role. You will also be expected to contribute to cross-competency work and COE activities, and manage escalations and risks. As a Senior Azure Cloud Engineer, you are required to have a minimum of 6 years of hands-on experience in building advanced data warehousing solutions on leading cloud platforms, along with 3-5 years of Operate/Managed Services/Production Support experience. Your responsibilities will include designing scalable and secure data structures, developing data pipelines for downstream consumption, and implementing ETL processes using tools like Informatica, Talend, SSIS, AWS, Azure, Spark, SQL, and Python. Experience with data analytics tools, data governance solutions, ITIL processes, and strong communication and problem-solving skills are essential for this role. Knowledge of Azure Data Factory, Azure SQL Database, Azure Data Lake, Azure Blob Storage, Azure Databricks, Azure Synapse Analytics, and Apache Spark is also required. Additionally, experience in data validation, cleansing, security, and privacy measures, as well as SQL querying, data governance, and performance tuning, is essential. Nice-to-have qualifications for this role include Azure certification. Managed Services - Data, Analytics & Insights Managed Service at PwC focuses on providing integrated services and solutions to clients, enabling them to optimize operations and accelerate outcomes through technology and human-enabled experiences. The team emphasizes a consultative approach to operations, leveraging industry insights and talent to drive transformational journeys and sustained client outcomes. As a member of the Data, Analytics & Insights Managed Service team, you will be involved in critical offerings, help desk support, enhancement, optimization work, and strategic advisory engagements. Your role will require a mix of technical expertise and relationship management skills to support customer engagements effectively.
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a highly skilled Data Governance Developer with expertise in Microsoft Purview and metadata management, you will be responsible for supporting enterprise-wide data governance initiatives. Your role will involve implementing scalable data governance solutions, particularly in the investment banking or financial services domain, and ensuring the enforcement of data policies, standards, and lineage across cloud environments. You will lead and support the complete implementation of Microsoft Purview, including tasks such as configuration, scanning, and API integrations. Additionally, you will design and manage enterprise business glossaries, data classifications, metadata lineage, and technical/business metadata association. Your responsibilities will also include configuring and automating metadata ingestion pipelines from various sources in Azure, AWS, and GCP environments. Collaboration with data stewards, architects, and platform teams will be essential to define and maintain data governance standards and processes. You will be expected to ensure compliance with cloud security, regulatory standards, and data governance frameworks. Monitoring metadata quality, lineage visibility, and data catalog health through regular audits and improvements will also be part of your duties. Furthermore, you will provide technical guidance on Purview APIs and their integration with AI/ML data pipelines. Your contribution to data governance frameworks in Agile project delivery and DevOps cultures will be crucial for the success of the data governance initiatives. To be successful in this role, you should have at least 5 years of experience in data governance, metadata management, or data cataloging in enterprise environments. You must possess a minimum of 12 years of hands-on experience with Microsoft Purview, including at least one end-to-end implementation. Proficiency in Purview API usage, resource scanning, classification, and metadata mapping is required. A strong understanding of data lineage, data dictionaries, and glossary management is essential. Previous experience in investment banking, capital markets, or regulated financial environments would be highly advantageous. Familiarity with cloud platforms such as Azure, AWS, and GCP, especially in data services and governance controls, is preferred. Knowledge of AI/ML-enabled metadata automation and intelligent classification is considered a plus. Excellent interpersonal and communication skills are necessary to engage with both technical and business stakeholders effectively. Holding Microsoft certifications in Azure Data Engineer, Security Engineer, or Purview will be a strong advantage in this role.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position at our organization involves contributing to the establishment and implementation of new or updated application systems and programs in collaboration with the Technology team. Your role will primarily focus on applications systems analysis and programming activities. **Basic Qualifications:** - Minimum of 5+ years of experience in Ab Initio Metadata Hub application development. - Strong understanding of data lineage, metadata management, reference data development, and data analytics. - Proficiency in relational databases such as Oracle, SQL, PL/SQL. - Strong knowledge in areas such as data lineage and application development, and experience in Python or Java coding. - Hands-on experience with coding languages and tool-based configurations. - Full Software Development Kit (SDK) development cycle experience. - Problem-solving skills and the ability to work independently or as part of a team. - Proficiency in Ab Initio Metadata Hub (mHub) or Python programming. - Proficiency in at least one of the following programming languages: Java, API, Python. - Passion for development, strong work ethic, and dedication to continuous learning. - Experience with code optimization techniques for different hardware architectures. **Preferred Qualifications:** - Bachelor's degree in computer science or a related field. - Experience in relational databases such as SQL, PL/SQL, Oracle, etc. - Experience with code development, metadata management, reference data, and lineage tools. - Experience in developing data lineage using tools or custom code. - Experience in data management and coding languages. **Responsibilities:** - Develop and maintain applications for complex enterprise data lineage. - Optimize industry-based tools to simplify enterprise-level data complexity via data lineage. - Debug and resolve graph-related issues. - Collaborate on designing and implementing new features to simplify complex problems. - Conduct code reviews for quality assurance. - Write and maintain documentation for functionalities and APIs. - Integrate and validate third-party libraries and tools. - Manage source code using version control systems. - Implement algorithms for code generation and optimization. - Perform code refactoring for better maintainability and efficiency. - Stay updated with advancements in data lineage technology. - Profile and benchmark compiler performance on various platforms. - Develop automated testing and verification of the codebase and functionality. - Provide technical support to teams using technical expertise. - Analyze performance metrics to identify areas for improvement. - Participate in design and architecture discussions. - Use static and dynamic analysis tools to improve code quality. - Collaborate with cross-functional teams. - Research new techniques and methodologies. - Contribute to and engage with open-source compiler projects. This job description offers a comprehensive overview of the responsibilities and qualifications required for the role. Additional job-related duties may be assigned as necessary.
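Since the role centres on building data lineage in code, here is a small, self-contained Python illustration: column-level lineage represented as a graph, with a breadth-first walk that traces a reported field back to its source fields. The dataset and column names are invented.

```python
# Self-contained illustration of column-level lineage as a graph, with a
# breadth-first walk back to source fields. Names are invented examples.
from collections import deque

# edges: derived field -> fields it was computed from
LINEAGE = {
    "report.net_revenue": ["dw.gross_revenue", "dw.total_discounts"],
    "dw.gross_revenue": ["src.orders.amount"],
    "dw.total_discounts": ["src.orders.discount", "src.promos.rate"],
}

def upstream(field: str) -> list[str]:
    """Breadth-first walk to every field feeding `field`, nearest first."""
    seen, queue, order = set(), deque([field]), []
    while queue:
        node = queue.popleft()
        for parent in LINEAGE.get(node, []):
            if parent not in seen:
                seen.add(parent)
                order.append(parent)
                queue.append(parent)
    return order

print(upstream("report.net_revenue"))
# ['dw.gross_revenue', 'dw.total_discounts', 'src.orders.amount', ...]
```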
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Punjab
On-site
Americana Restaurants International PLC, a prominent player in the MENA region and Kazakhstan's Out-of-Home Dining industry, is a leading operator of Quick Service Restaurants (QSR) and casual dining establishments globally. With a vast network of over 2,600 restaurants across 12 countries in the Middle East, North Africa, and Kazakhstan, we are committed to delivering exceptional food, superior service, and memorable experiences with a team of more than 40,000 talented individuals. As part of our commitment to innovation and operational excellence, we have established our Center of Excellence in Mohali, India. This center focuses on product development, IT, Digital, AI, and Analytics, implementing global IT best practices to drive digital transformation across our operations worldwide. In this role, you will be responsible for designing, implementing, and supporting Oracle EPM Cloud solutions for planning, budgeting, forecasting, and consolidation. Your efforts will significantly improve financial visibility, accuracy, and decision-making processes across our multi-country operations. Key Responsibilities: - Configure and deploy EPBCS modules (Workforce, Financials, Capex, Projects, Custom) - Implement and maintain FCCS for close, consolidation, and reporting - Develop business rules, forms, dashboards, and Smart View templates - Manage data integration using Data Management / EPM Integration Agent - Collaborate with finance teams to support budgeting and closing cycles - Perform performance tuning, troubleshooting, and user support - Ensure metadata, security, and workflow alignment with best practices - Explore automation with EPM Automate and REST APIs where applicable Qualifications: - 5-7 years of hands-on experience with Oracle EPM Cloud - Minimum 2+ years of experience with EPBCS and FCCS - Strong understanding of FP&A, consolidation, intercompany eliminations, and currency translations - Proficient in Calculation Manager / Business Rules, Smart View / Web Studio Reporting, and Data Management / Integration Agent - Familiarity with Oracle EPM Automate, REST APIs, and metadata management - Excellent collaboration and communication skills Education & Qualifications: - Bachelor's degree in Accounting, Finance, Computer Science, or a related field - Oracle EPM certifications (EPBCS and/or FCCS) preferred - Knowledge of additional modules like ARCS, TRCS, Narrative Reporting is a plus Join us at Americana Restaurants International PLC and be a part of our dynamic team driving innovation and excellence in the Out-of-Home Dining industry.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
The OCI Data Catalog PoC Specialist will be responsible for designing, executing, and documenting a Proof of Concept (PoC) for Oracle Cloud Infrastructure (OCI) Data Catalog as part of the client's broader Data Governance strategy. You will lead the end-to-end delivery of the OCI Data Catalog PoC, including requirements gathering, solution design, configuration, and demonstration. Collaborating with client stakeholders to understand data governance objectives, data sources, and cataloguing needs will be a key aspect of your role. In this position, you will configure and integrate OCI Data Catalog with relevant data sources such as Oracle Autonomous Database, Object Storage, and on-premises databases. You will be required to develop and execute test cases to showcase metadata harvesting, data lineage, search, classification, and data stewardship features. Additionally, integrating catalog output with a Marketplace application to export and automate metadata sharing will be part of your responsibilities. Documenting PoC outcomes, lessons learned, and recommendations for next steps is essential. You will also provide knowledge transfer and training to client teams on OCI Data Catalog capabilities and usage. Troubleshooting issues and liaising with Oracle support as needed during the PoC will also fall under your purview. The ideal candidate should have at least 3 years of experience in data governance, data management, or cloud data solutions. Hands-on experience with Oracle Cloud Infrastructure (OCI), particularly OCI Data Catalog, is required. Familiarity with data catalog concepts such as metadata management, data lineage, data classification, and stewardship is necessary. Experience in integrating data catalogs with various data sources, both cloud and on-premises, is also crucial. Strong analytical, problem-solving, and communication skills are desired, along with the ability to document technical findings and present to both technical and business stakeholders.
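For the PoC work above, programmatic access to OCI Data Catalog looks roughly like the sketch below, using the oci Python SDK. The client and method names follow the SDK's conventions but should be verified against the oci.data_catalog documentation; the compartment OCID is a placeholder.

```python
# Sketch: list Data Catalog instances with the oci Python SDK. Method and
# attribute names follow SDK conventions but should be verified against the
# oci.data_catalog docs; the compartment OCID is a placeholder.
import oci

config = oci.config.from_file()                 # reads ~/.oci/config
client = oci.data_catalog.DataCatalogClient(config)

catalogs = client.list_catalogs(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder OCID
).data

for catalog in catalogs:
    print(catalog.display_name, catalog.lifecycle_state)
```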
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the data engineering team at PepsiCo, you will play a crucial role in developing and overseeing data product build & operations. Your primary responsibility will be to drive a strong vision for how data engineering can proactively create a positive impact on the business. Working alongside a team of data engineers, you will build data pipelines, rest data on the PepsiCo Data Lake, and facilitate exploration and access for analytics, visualization, machine learning, and product development efforts across the company. Your contributions will directly impact the design, architecture, and implementation of PepsiCo's flagship data products in areas such as revenue management, supply chain, manufacturing, and logistics. You will collaborate closely with process owners, product owners, and business users, operating in a hybrid environment that includes in-house, on-premise data sources as well as cloud and remote systems. Your responsibilities will include active contribution to code development, managing and scaling data pipelines, building automation and monitoring frameworks for data pipeline quality and performance, implementing best practices around systems integration, security, performance, and data management, and empowering the business through increased adoption of data, data science, and business intelligence. Additionally, you will collaborate with internal clients, drive solutioning and POC discussions, and evolve the architectural capabilities of the data platform by engaging with enterprise architects and strategic partners. To excel in this role, you should have at least 6 years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture. You should also possess 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools, along with expertise in SQL optimization, performance tuning, and programming languages like Python, PySpark, and Scala. Experience in cloud data engineering, specifically in Azure, is essential, and familiarity with Azure cloud services is a plus. You should have experience in data modeling, data warehousing, building ETL pipelines, and working with data quality tools. Proficiency in MPP database technologies, cloud infrastructure, containerized services, version control systems, deployment & CI tools, and Azure services like Data Factory, Databricks, and Azure Machine Learning tools is desired. Additionally, experience with Statistical/ML techniques, retail or supply chain solutions, metadata management, data lineage, data glossaries, agile development, DevOps, and DataOps concepts, and business intelligence tools will be advantageous. A degree in Computer Science, Math, Physics, or related technical fields is preferred for this role.
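As an illustration of the pipeline-quality monitoring this role builds, the sketch below runs simple row-count and null-rate checks on a PySpark batch before it lands in the lake. The thresholds, column names, and sample rows are illustrative.

```python
# Sketch: simple row-count and null-rate checks on a batch before it lands
# in the lake. Thresholds, column names, and sample rows are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, "store-9", 120.0), (2, None, 75.5), (3, "store-4", None)],
    "txn_id INT, store_id STRING, amount DOUBLE",
)

total = df.count()
checks = {
    "non_empty": total > 0,
    "store_id_null_rate_under_5pct":
        df.filter(F.col("store_id").isNull()).count() / total < 0.05,
    "amount_never_null":
        df.filter(F.col("amount").isNull()).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
print("failed checks:", failed or "none")
```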
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
hyderabad, telangana
On-site
As a Reporting and Analytics Lead at HSBC, you will play a crucial role in managing a cross-functional team, strategic external suppliers, and business stakeholders. You will set clear development priorities, performance expectations, and accountability measures for supplier teams. Proactively managing risks, issues, and changes in scope will be essential to ensure alignment with business objectives. Your responsibilities will include reporting regularly on project status, updating Jira and Confluence, preparing project plans, providing mentoring and guidance to team members, and fostering a culture of collaboration and accountability focused on delivery excellence.

You will oversee the ingestion and transformation of data from automated feeds and manual sources, implement robust data validation processes, and lead the transformation of legacy data into scalable, modular platforms. Driving automation, reducing manual interventions, and defining enterprise data standards will be key aspects of your role. Additionally, you will design fault-tolerant ETL/ELT pipelines, ensure data integrity across all stages of analysis, and mitigate risks associated with decision-support systems through validation and testing.

In this role, you will act as a strategic partner in gathering and refining business requirements, conducting impact assessments, and translating business needs into clear documentation. You will build internal capability around data standardization, automation best practices, and documentation, ensuring that solutions meet both functional and non-functional business requirements. Engaging with business leaders and technical teams, you will facilitate decision-making and alignment and lead workshops, presentations, and status meetings with diverse audiences.

To be successful in this role, you should possess a Master's degree in Business, Computer Science, Engineering, or a related field, along with 12+ years of experience in project management, enterprise data infrastructure, or engineering roles. A strong background in business analytics, data standards, and governance frameworks is required, as is familiarity with data pipeline tooling, automation practices, and version control. Hands-on experience with data transformation tools, basic knowledge of Python and SQL scripting, and relevant certifications such as PMP, PRINCE2, Agile/Scrum, or CBAP are preferred. A background in Financial Services, Banking, or an Enterprise IT environment is advantageous, as is deep expertise in SQL Server, the GCP platform, and large-scale ETL/ELT architecture.

Your combination of technical skills, analytical acumen, collaborative abilities, and leadership mindset will enable you to contribute effectively to enhancing operational excellence and informed decision-making within the organization. Join HSBC and make a real impression by leveraging your expertise in reporting and analytics to drive impactful outcomes.
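To make the data-validation responsibility concrete, here is a minimal, hypothetical Python sketch of the kind of reconciliation check such a pipeline might run; it assumes pandas and illustrative file and column names, and is not HSBC's implementation.

```python
# Minimal sketch of post-load validation: reconcile row counts between
# a source extract and its loaded copy, and flag null key columns.
# File paths and column names are hypothetical.
import pandas as pd

source = pd.read_csv("source_extract.csv")
loaded = pd.read_csv("warehouse_copy.csv")

errors = []
if len(source) != len(loaded):
    errors.append(f"row count mismatch: {len(source)} vs {len(loaded)}")

null_keys = loaded["account_id"].isna().sum()
if null_keys:
    errors.append(f"{null_keys} rows with null account_id")

if errors:
    raise ValueError("validation failed: " + "; ".join(errors))
print("validation passed")
```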
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You will have a key role in implementing and adopting the data governance framework to modernize Amgen's data ecosystem, positioning Amgen as a leader in biopharma innovation. This role involves utilizing cutting-edge technologies such as Generative AI, Machine Learning, and integrated data. Leveraging your domain, technical, and business process expertise, you will provide exceptional support for Amgen's data governance framework. Collaboration with business stakeholders and data analysts is essential to ensure successful implementation and adoption of the framework, and you will work closely with the Product Owner and other Business Analysts to ensure operational support and excellence within the team.

Responsibilities include:
- Implementing the data governance and data management framework within a specific domain of expertise (Research, Development, Supply Chain, etc.).
- Operationalizing the enterprise data governance framework and aligning stakeholders with their data governance needs.
- Enforcing standards and data reusability with Enterprise MDM and Reference Data.
- Driving cross-functional alignment to ensure adherence to data governance principles.
- Developing and implementing data quality frameworks and standards to ensure consistent, accurate, and reliable data.
- Maintaining documentation on data definitions, standards, flows, legacy data structures, common data models, and data quality best practices for assigned domains.
- Ensuring compliance with data privacy, security, and regulatory policies, and collaborating with stakeholders and data stewards to address quality concerns.
- Defining specifications and shaping the development of data foundations in collaboration with Technology teams and business functions.
- Leading data quality initiatives to enhance data quality processes.

Must-have functional skills:
- Technical skills with knowledge of pharma processes in a specialized domain.
- Proficiency in data management, common data models, metadata management, data quality, master data management, data stewardship, and data protection.
- Experience in the data product development life cycle and in enabling data dictionaries and business glossaries for increased data reusability and literacy.
- Strong customer focus with excellent communication skills for working with internal stakeholders and external partners.
- Experience with data governance platforms such as Collibra and Alation.
- Excellent problem-solving skills and attention to detail, with proficiency in data analysis tools.

Good-to-have functional skills:
- Experience with data governance councils or forums.
- Familiarity with Agile software development methodologies.
- 3-5 years of experience in data quality management, data governance, or related roles.

Soft skills:
- Highly organized and able to work independently.
- Strong analytical and assessment skills.
- Effective collaboration with global, virtual teams.
- Successful management of multiple priorities.
- Team-oriented with a focus on achieving team goals.
- Ambitious and committed to skill and career development.
- Ability to build business relationships and understand end-to-end data needs.
- Excellent interpersonal, presentation, and public speaking skills.
- Initiative, self-motivation, attention to detail, and customer focus.

Basic qualifications:
- Any degree and 9-13 years of experience.
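As an illustration of the data quality frameworks mentioned above, the sketch below shows a simple rule-based completeness check in Python with pandas. The rules, thresholds, dataset, and column names are hypothetical, not Amgen's standards.

```python
# Minimal sketch of a rule-based data quality check: each rule maps a
# column to the maximum allowed fraction of missing values.
# Dataset, columns, and thresholds are hypothetical.
import pandas as pd

completeness_rules = {
    "product_code": 0.0,   # must never be missing
    "batch_id": 0.0,
    "site_name": 0.05,     # up to 5% missing tolerated
}

df = pd.read_csv("master_data.csv")

for column, max_missing in completeness_rules.items():
    missing_ratio = df[column].isna().mean()
    status = "PASS" if missing_ratio <= max_missing else "FAIL"
    print(f"{column}: {missing_ratio:.1%} missing -> {status}")
```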
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
The AVP, Principal Product Engineer at Synchrony plays a pivotal role in modernizing workloads by leading vendor refactoring efforts, executing break-fix tasks, and devising user enablement strategies. This role requires a deep understanding of AWS analytics services such as EMR Studio, S3, Redshift, Glue, and Tableau, coupled with strong skills in user engagement, training development, and change management. Collaboration with vendors, business users, and cloud engineering teams is essential to refactor legacy code, ensure seamless execution of fixes, and create comprehensive training materials and user job aids. You will also oversee user testing, validation, and sign-offs, which is crucial for a smooth transition to modern cloud-based solutions, enhancing adoption and minimizing disruption. This role presents an exciting opportunity to lead cloud migration initiatives, bolster analytics capabilities, and drive user transformation efforts within an innovative cloud environment. The incumbent will be accountable for the technical success of the project, fostering a collaborative, efficient, and growth-oriented atmosphere within the development team.

Key responsibilities:
- Lead and mentor a team of data/analytics/cloud engineers, ensuring adherence to best practices in data development, testing, and deployment.
- Conduct thorough data analysis to reveal insights, trends, and anomalies that support business decisions.
- Collaborate with cross-functional teams, including BI, data science, and business stakeholders, to understand data needs and translate them into technical solutions.
- Guide the team in architecting solutions involving AWS cloud components such as EMR, S3, Athena, Redshift, SageMaker, and SAS Viya.
- Support data lineage, cataloging, and metadata management efforts using tools on AWS.
- Keep abreast of emerging technologies and recommend enhancements to the data platform architecture.

Qualifications/requirements:
- Minimum 6+ years of expertise in data warehousing and enterprise data lake architectures; alternatively, 8+ years of relevant experience in the absence of a degree.
- Proficiency in crafting complex, optimized SQL queries for large-scale data analysis and transformation.
- 2+ years of experience with SQL, Python, PySpark, AWS EMR, S3, and Athena.
- Ability to lead and mentor a technical team, conduct code reviews, and enforce engineering best practices.
- Familiarity with metadata management tools and cloud-native data engineering practices.

Desired characteristics:
- Experience with AWS cloud services; certifications in AWS or another cloud platform.
- Proficiency in Agile project management methods and practices.
- Ability to see the broader context beyond day-to-day coding tasks.
- Excellent verbal, written, communication, and organizational skills.
- Ability to empathize with team members' emotions and challenges, leading with compassion and support.
- Delegation skills to assign tasks effectively and empower team members to solve problems autonomously.
- Ability to introduce new technologies, tools, or methodologies that enhance the development process.
- Working knowledge of Tableau, SAS Viya, Stonebranch, and Hive is advantageous.

Work timings: 3 PM to 12 AM IST. Note: Work timings may vary based on business needs and require flexibility between 06:00 AM and 11:30 AM Eastern Time for meetings with India and US teams; the remaining hours are flexible for the employee to choose.

For internal applicants:
- Understand the mandatory skills required for the role before applying.
- Notify your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated with relevant details and upload an updated resume.
- No ongoing corrective action plan is allowed.
- Only L9+ employees who have completed 18 months in the organization and 12 months in the current role and level are eligible to apply.

Grade/Level: 11
Job Family Group: Information Technology
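As a concrete illustration of the AWS analytics stack named above, here is a minimal, hypothetical boto3 sketch that submits an Athena query over data in S3 and polls for completion; the database, table, and output bucket are placeholders, not Synchrony's environment.

```python
# Minimal sketch: run an Athena query over S3 data with boto3 and poll
# for completion. Database, table, and output bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

run = athena.start_query_execution(
    QueryString="SELECT account_type, COUNT(*) FROM transactions GROUP BY 1",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
print(f"query {query_id} finished with state {state}")
```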
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Ops Capability Deployment Analyst at our organization, you will be a seasoned professional contributing to the development of new solutions, frameworks, and techniques while improving processes and workflows for the Enterprise Data function. Your role will involve integrating subject matter and industry expertise within a defined area, requiring an in-depth understanding of how different areas collectively integrate within the sub-function to contribute to the overall business objectives. Your primary responsibility will be to perform data analytics and analysis across various asset classes, as well as to establish data science and tooling capabilities within the team. You will collaborate closely with the wider Enterprise Data team, particularly the front-to-back leads, to deliver on business priorities effectively.

Joining the B&I Data Capabilities team within Enterprise Data, you will help manage the Data Quality/Metrics/Controls program and implement improved data governance and data management practices throughout the region. The data quality program will focus on enhancing our approach to data risk and meeting regulatory commitments in this area.

Key responsibilities:
- Apply a data engineering background and expertise in distributed data platforms and cloud services.
- Demonstrate a sound understanding of data architecture and integration with enterprise applications.
- Research and assess new data technologies, data mesh architecture, and self-service data platforms.
- Collaborate with the Enterprise Architecture team to define and refine the overall data strategy.
- Address performance bottlenecks, design batch orchestrations, and deliver reporting capabilities.
- Conduct complex data analytics on large datasets, including data cleansing, transformation, joins, and aggregation.
- Develop analytics dashboards and data science capabilities for Enterprise Data platforms.
- Communicate findings and propose solutions to stakeholders effectively.
- Translate business and functional requirements into technical design documents.
- Collaborate with cross-functional teams such as Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control, and Production Support.
- Prepare handover documents and manage SIT, UAT, and implementation processes.
- Demonstrate a deep understanding of how the development function integrates within the overall business/technology to achieve objectives.
- Perform other assigned duties as necessary.

Skills and qualifications:
- 10+ years of active development background in Financial Services or Finance IT.
- Experience with data quality, data tracing, data lineage, and metadata management tools.
- Hands-on experience with ETL using PySpark on distributed platforms, data ingestion, Spark optimization, and batch orchestration.
- Proficiency in Hive, HDFS, Airflow, and job schedulers.
- Strong programming skills in Python with experience in data manipulation and analysis libraries (pandas, NumPy).
- Ability to write complex SQL and stored procedures.
- Experience with DevOps, Jenkins/Lightspeed, Git, and Copilot.
- Proficiency in one or more BI visualization tools such as Tableau or Power BI.
- Proven experience in implementing data lakes/data warehouses for enterprise use cases.
- Exposure to analytical tools and AI/ML is desired.

Education:
- Bachelor's/University degree or master's degree in Information Systems, Business Analysis, or Computer Science.

If you are looking for a challenging opportunity to utilize your expertise in data analytics, data engineering, and data science, this role offers a dynamic environment where you can contribute to the growth and success of the Enterprise Data function within our organization.
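The complex data analytics described above — cleansing, joins, and aggregation over large datasets — might look like the following minimal PySpark sketch; the table and column names are hypothetical, not Citi's data.

```python
# Minimal sketch: cleanse trade data, join to a reference table, and
# aggregate notional by desk. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade_metrics").getOrCreate()

trades = spark.table("raw.trades").filter(F.col("notional").isNotNull())
desks = spark.table("ref.desks")

summary = (trades
           .join(desks, on="desk_id", how="left")
           .groupBy("desk_name", "asset_class")
           .agg(F.sum("notional").alias("total_notional"),
                F.count("*").alias("trade_count")))

summary.write.mode("overwrite").saveAsTable("curated.desk_summary")
```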
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
nagpur, maharashtra
On-site
As a Data Architect at our company, you will be responsible for designing scalable data architectures for web-based platforms and cloud-native systems. Your role will involve hands-on work with relational and NoSQL databases such as PostgreSQL, MongoDB, and Cassandra. Additionally, you will work with cloud-based data services, data pipelines, and orchestration tools such as Azure Data Services, AWS, GCP, Apache Airflow, and Azure Data Factory. In this role, you will have the opportunity to apply your expertise in Big Data technologies, including Spark, Kafka, and Delta Lake. A deep understanding of data modeling, ETL/ELT processes, and data lifecycle management will be crucial to your success in this position. Familiarity with cybersecurity, log/event data formats (e.g., syslog, JSON, STIX), and security telemetry is considered a strong advantage.

Your responsibilities will include defining the data architecture and strategy for the CMP, ensuring alignment with product requirements and security standards. You will design and implement data models, data flows, and integration patterns for structured, semi-structured, and unstructured data. Collaboration with DevOps, engineering, and security teams will be essential to build scalable data pipelines and ensure real-time and batch processing capabilities. Moreover, you will select and integrate appropriate data storage and analytics technologies such as relational databases, data lakes, NoSQL stores, and time-series databases. Ensuring compliance with data governance, privacy, and security best practices will be a key aspect of your role, and you will establish data quality frameworks, metadata management, and lineage tracking to support analytics and reporting use cases on robust data architecture foundations.

At our company, we offer a culture of caring where people come first. You will experience an inclusive culture of acceptance and belonging, building meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development, providing numerous opportunities to grow personally and professionally. You will have the chance to work on projects that matter, collaborating with clients globally to engineer impactful solutions. We believe in the importance of balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a work-life balance. As a high-trust organization, integrity is key, and you can trust GlobalLogic as a safe, reliable, and ethical global company. Join us in shaping the digital revolution, transforming businesses, and redefining industries through intelligent products, platforms, and services.
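As a small illustration of the orchestration tooling named above, here is a minimal, hypothetical Apache Airflow DAG sketch wiring an ingest task to a transform task; the DAG name, schedule, and task bodies are placeholders, and the sketch targets the Airflow 2.x API.

```python
# Minimal sketch of an Airflow DAG: a daily ingest -> transform chain.
# Task bodies are placeholders; in practice these would call real
# pipeline code. Written against the Airflow 2.x API.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pulling events from source systems")  # placeholder


def transform():
    print("normalizing and enriching events")  # placeholder


with DAG(
    dag_id="security_telemetry_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform",
                                    python_callable=transform)
    ingest_task >> transform_task
```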
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
In this role, you will be responsible for designing and implementing Adobe Experience Manager (AEM) Digital Asset Management (DAM) solutions to effectively manage digital assets. Your main tasks will include developing custom components, workflows, and integrations using AEM and Java. You will work closely with content creators, marketers, and developers to optimize asset usage and enhance content delivery. Your expertise will be crucial in ensuring metadata accuracy, asset tagging, and version control within the DAM system. You will also troubleshoot and resolve issues related to asset ingestion, retrieval, and publishing, and maintain documentation and provide training and support for DAM users. To excel in this role, it is essential to stay updated on AEM best practices, new features, and industry trends. Your contributions will play a key role in shaping the digital content strategy and improving operational efficiency across the organization.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position involving the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

You should have at least 5 years of Ab Initio Metadata Hub development experience, along with a strong understanding of data lineage, metadata management, reference data development, and data analytics. Good knowledge of relational databases such as Oracle and SQL/PLSQL is required, as is strong knowledge of data lineage and application development and experience in Python or Java coding. Hands-on experience with a coding language and tool-based configuration is essential, as is full software development lifecycle experience. Your role will involve pragmatic problem-solving and the ability to work independently or as part of a team. Proficiency in Ab Initio mHub or Python is necessary, as well as proficiency in at least one of Java, Python, or API development. A passion for development, a strong work ethic, and a commitment to continuous learning are also important.

Preferred qualifications include a Bachelor's degree in computer science or a related field; experience with relational databases (e.g., SQL/PLSQL, Oracle); experience with code development, metadata management, reference data, and lineage tools; experience in developing data lineage using a tool or custom code; and experience in data management.

Your responsibilities will include:
- Developing and maintaining applications for complex enterprise data lineage.
- Optimizing industry-standard tools to simplify enterprise-level data complexity via data lineage.
- Debugging and resolving graph-related issues.
- Collaborating on designing and implementing new features to simplify complex problems.
- Conducting code reviews for quality assurance.
- Writing and maintaining documentation for functionalities and APIs.
- Integrating and validating third-party libraries and tools.
- Managing source code using version control systems.
- Implementing algorithms for code generation and optimization.
- Refactoring code for better maintainability and efficiency.
- Staying updated on advancements in data lineage technology.
- Profiling and benchmarking application performance on various platforms.
- Developing automated testing and verification of the code base and functionality.
- Providing technical support to teams and analyzing performance metrics to identify areas for improvement.
- Participating in design and architecture discussions.
- Using static and dynamic analysis tools to improve code quality.
- Collaborating with cross-functional teams, researching new techniques and methodologies, and contributing to and engaging with relevant open-source projects.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
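For a flavor of the data-lineage development mentioned above, the following is a minimal, hypothetical Python sketch that models lineage as a directed graph with the networkx library and traces a column's upstream sources. Node names are placeholders, and the sketch stands in for, rather than reproduces, a tool like Metadata Hub.

```python
# Minimal sketch: model data lineage as a directed graph and walk the
# upstream ancestry of a target column. Node names are hypothetical.
import networkx as nx

lineage = nx.DiGraph()
# Edges point from a source element to the element derived from it.
lineage.add_edge("src.orders.amount", "stg.orders.amount_usd")
lineage.add_edge("ref.fx_rates.rate", "stg.orders.amount_usd")
lineage.add_edge("stg.orders.amount_usd", "mart.revenue.total_usd")

# Everything that feeds mart.revenue.total_usd, directly or indirectly.
upstream = nx.ancestors(lineage, "mart.revenue.total_usd")
for node in sorted(upstream):
    print(node)
```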
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You will be working as a Data Engineer with 5-8 years of hands-on experience, based in Coimbatore or Hyderabad or working remotely. Your responsibilities will include using the Databricks environment with PySpark to design, develop, and maintain scalable data pipelines. Strong data analysis skills and experience in implementing ETL processes that ensure data quality and performance are essential. Knowledge of data warehousing concepts, data modeling, and metadata management will be advantageous, and good communication skills, especially customer-facing skills, are required for this role.
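As a sketch of the ETL quality checks referenced above — with hypothetical table and column names, not a specific client's pipeline — a PySpark step on Databricks might gate a load like this:

```python
# Minimal sketch of an ETL quality gate on Databricks: refuse to
# publish a batch that contains duplicate keys or null business dates.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

batch = spark.table("staging.daily_sales")

duplicates = batch.groupBy("sale_id").count().filter("count > 1").count()
null_dates = batch.filter(F.col("business_date").isNull()).count()

if duplicates or null_dates:
    raise ValueError(
        f"quality gate failed: {duplicates} duplicate keys, "
        f"{null_dates} null business dates")

batch.write.mode("append").saveAsTable("curated.daily_sales")
```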
Posted 1 month ago