Posted: 6 hours ago | Platform: Glassdoor

Work Mode: On-site

Job Type: Part Time

Job Description

We are seeking an experienced Data Architect with deep expertise in Databricks architecture to design, implement, and optimize large-scale data platforms and advanced analytics solutions. The ideal candidate will have strong hands-on experience with Azure Databricks, Delta Lake, Unity Catalog, and Lakehouse architectures, along with a solid understanding of data governance, data modeling, and modern data engineering best practices.

Key Responsibilities

  • Architect and design scalable and secure data lakehouse solutions using Databricks and cloud-native data services (Azure).
  • Define end-to-end data architecture, including ingestion, transformation, storage, and consumption layers.
  • Lead data modeling (conceptual, logical, physical) aligned with business and analytical requirements.
  • Develop and optimize ETL/ELT pipelines using PySpark, Spark SQL, and Databricks workflows.
  • Implement and enforce data governance, metadata management, and access controls via Unity Catalog and data lineage tools.
  • Collaborate with data engineers, data scientists, and business teams to define data requirements and architectural patterns.
  • Design CI/CD pipelines for Databricks notebooks and workflows using Terraform, GitHub Actions, or Azure DevOps.
  • Lead migration initiatives from legacy systems to Databricks Lakehouse.
  • Ensure data quality, performance tuning, and cost optimization across environments.
  • Stay current with emerging Databricks features, cloud technologies, and data architecture trends.
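The orchestration work described above is commonly expressed as a multi-task Databricks job over a medallion (bronze/silver/gold) layout. A minimal sketch, written as a Python dict mirroring the Databricks Jobs API 2.1 payload — the field names (`tasks`, `task_key`, `depends_on`, `notebook_task`) come from that API, while all job, task, and notebook names here are hypothetical:

```python
# Hedged sketch of a Databricks Workflows multi-task job definition.
# Field names follow the Databricks Jobs API 2.1 payload; every path and
# name below is hypothetical, for illustration only.
job_spec = {
    "name": "lakehouse-medallion-pipeline",
    "tasks": [
        {
            "task_key": "ingest_bronze",
            "notebook_task": {"notebook_path": "/Pipelines/ingest_bronze"},
        },
        {
            "task_key": "transform_silver",
            "depends_on": [{"task_key": "ingest_bronze"}],
            "notebook_task": {"notebook_path": "/Pipelines/transform_silver"},
        },
        {
            "task_key": "publish_gold",
            "depends_on": [{"task_key": "transform_silver"}],
            "notebook_task": {"notebook_path": "/Pipelines/publish_gold"},
        },
    ],
}

# Sanity check: every declared dependency must refer to a defined task.
task_keys = {t["task_key"] for t in job_spec["tasks"]}
undefined = [
    d["task_key"]
    for t in job_spec["tasks"]
    for d in t.get("depends_on", [])
    if d["task_key"] not in task_keys
]
```

In practice a payload like this would be submitted through the Jobs API or, per the Terraform responsibility above, managed declaratively via the Databricks Terraform provider's `databricks_job` resource.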

Required Skills & Experience

  • 10+ years of overall experience in data architecture, data engineering, or analytics platform design.
  • Hands-on experience with Databricks (cluster management, job orchestration, notebook optimization).
  • Strong expertise in:
      ◦ Apache Spark / PySpark
      ◦ Delta Lake / Delta Live Tables
      ◦ Unity Catalog & Databricks Workflows
      ◦ Lakehouse architecture principles
  • Solid understanding of data modeling, dimensional design, and data integration patterns.
  • Experience with Terraform for infrastructure automation in Databricks or cloud data platforms (added advantage).
  • Hands-on experience with Azure Data Lake Storage (ADLS) and Azure Synapse.
  • Proven experience with data governance, lineage, and cataloging tools (Purview, Collibra, Alation, or similar).
  • Strong proficiency in SQL, Python, and Spark performance tuning.
  • Familiarity with MLflow, Feature Store, and MLOps pipelines (advantageous).
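As a concrete illustration of the dimensional-design skill listed above: in a star schema, a fact table references conformed dimensions through surrogate keys, and rollups are produced by joining on those keys. A minimal, library-free Python sketch with hypothetical table and column names:

```python
# Star-schema illustration: a sales fact table joined to a date dimension
# on a surrogate key (date_key), then aggregated by year. All table and
# column names are hypothetical.
dim_date = {
    20240101: {"year": 2024, "quarter": "Q1"},
    20240704: {"year": 2024, "quarter": "Q3"},
    20250101: {"year": 2025, "quarter": "Q1"},
}

fact_sales = [
    {"date_key": 20240101, "amount": 100.0},
    {"date_key": 20240704, "amount": 250.0},
    {"date_key": 20250101, "amount": 75.0},
]

# Join each fact row to its dimension row, then roll revenue up by year —
# the same shape a Spark SQL join + GROUP BY would produce.
revenue_by_year: dict[int, float] = {}
for row in fact_sales:
    year = dim_date[row["date_key"]]["year"]
    revenue_by_year[year] = revenue_by_year.get(year, 0.0) + row["amount"]
```

The equivalent Spark SQL rollup would be `SELECT d.year, SUM(f.amount) FROM fact_sales f JOIN dim_date d ON f.date_key = d.date_key GROUP BY d.year`.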

