Data Architect (12+ years)

Experience: 15 years

Salary: 0 Lacs

Posted: 20 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Contractual

Job Description

Key Responsibilities
  • Define and execute the enterprise data architecture strategy in alignment with business goals and future growth needs.
  • Architect, design, and implement Data Lakes, Data Warehouses, and Data Lakehouses on cloud platforms.
  • Lead modernization of data platforms using technologies such as Snowflake, Databricks, Azure Data Factory (ADF), cloud storage, and related ecosystem services.
  • Design and implement robust batch and streaming data pipelines using frameworks such as Apache Spark, ensuring scalability, reliability, performance, and data integrity.
  • Develop frameworks for data ingestion, transformation, orchestration, and automation (ETL/ELT), including real-time streaming ingestion pipelines.
  • Establish and enforce data governance, metadata management, data cataloging, data lineage and quality standards across the data ecosystem.
  • Collaborate with business stakeholders, analytics/BI teams, data engineers, and data scientists to understand requirements and deliver data solutions aligned with enterprise needs.
  • Provide technical leadership, mentor data engineering teams, drive best practices, and ensure maintainability of the data platform.
  • Keep abreast of emerging data technologies and industry trends; recommend and adopt improvements to continuously evolve the data platform.
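The ingestion/transformation/load duties above follow the standard ETL pattern. As a rough illustration of the kind of pipeline the role builds (all function names, fields, and the in-memory "warehouse" here are hypothetical, stdlib-only stand-ins for Spark/ADF components):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Ingest: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop rows failing a quality check."""
    out = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine bad records instead
        out.append(row)
    return out

def load(rows: list[dict], sink: list) -> None:
    """Load: append cleaned rows to the target store (a plain list here)."""
    sink.extend(rows)

raw = "id,amount\n1,10.5\n2,not-a-number\n3,7.0\n"
warehouse: list[dict] = []
load(transform(extract(raw)), warehouse)
print(len(warehouse))  # 2 — the malformed row is dropped by the quality gate
```

In production the same three stages would typically be Spark jobs orchestrated by ADF, with the quarantine path feeding a data-quality dashboard.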
Required Skills & Experience
  • 12–15 years of overall IT experience, with at least 8+ years in a Data Architect / senior data-platform architect role.
  • Proven experience architecting and implementing Data Lakes, Data Warehouses, and Data Lakehouses at enterprise scale.
  • Strong hands-on expertise with Snowflake (preferred), Databricks, Azure Data Factory, and cloud-native storage and services.
  • Deep proficiency in Apache Spark (batch and streaming), data ingestion frameworks, ETL/ELT workflows, and orchestration.
  • Programming skills in Python for data engineering, automation, custom transformations, and scripting.
  • Strong knowledge of data modeling, schema design (logical and physical), relational and/or non-relational data stores, query optimization, and performance tuning.
  • Experience with real-time streaming and event-driven data frameworks (e.g., message queues, event hubs, streaming ingestion).
  • Solid understanding of cloud-native data ecosystems (preferably Azure; AWS or GCP experience is also relevant).
  • Experience establishing and enforcing data governance, data cataloging, metadata management, security, and compliance best practices.
  • Excellent communication, stakeholder management, cross-functional collaboration, and leadership skills.
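The governance and quality-standards requirements above usually boil down to enforcing declarative rules over datasets. A minimal sketch of two such rules (the rule names, records, and columns are illustrative assumptions, not part of the posting):

```python
# Illustrative data-quality rules of the kind a governance framework enforces;
# in practice these map to checks in tools like Great Expectations or dbt tests.
def check_not_null(rows: list[dict], column: str) -> bool:
    """Every row must have a non-empty value in `column`."""
    return all(row.get(column) not in (None, "") for row in rows)

def check_unique(rows: list[dict], column: str) -> bool:
    """Values in `column` must be unique (e.g., a primary key)."""
    values = [row.get(column) for row in rows]
    return len(values) == len(set(values))

records = [
    {"customer_id": "C1", "email": "a@example.com"},
    {"customer_id": "C2", "email": "b@example.com"},
]
results = {
    "customer_id_not_null": check_not_null(records, "customer_id"),
    "customer_id_unique": check_unique(records, "customer_id"),
}
print(results)
```

Publishing the pass/fail results of rules like these alongside lineage metadata is what makes the quality standards enforceable rather than aspirational.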
Education & Background
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • Proven track record of designing and delivering large-scale data architecture solutions in enterprise environments.