Associate Data Architect II

0 years

0 Lacs

Posted: 1 day ago | Platform: LinkedIn

Work Mode

On-site

Job Type

Full Time

Job Description

Role Description

Key Accountabilities / Responsibilities:
  • Provide technical direction and leadership to data engineers on data platform initiatives, ensuring adherence to best practices in data modelling, end-to-end pipeline design, and code quality.
  • Review and optimize PySpark, SQL, and Databricks code for performance, scalability, and maintainability.
  • Offer engineering support and mentorship to data engineering teams within delivery squads, guiding them in building robust, reusable, and secure data solutions.
  • Collaborate with architects to define data ingestion, transformation, and storage strategies leveraging Azure services such as Azure Data Factory, Azure Databricks, and Azure Data Lake Storage.
  • Drive automation and CI/CD practices in data pipelines using tools such as Git, Azure DevOps, and DBT (good to have).
  • Ensure optimal data quality and lineage by implementing proper testing, validation, and monitoring mechanisms within data pipelines.
  • Stay current with evolving data technologies, tools, and best practices, continuously improving standards, frameworks, and engineering methodologies.
  • Troubleshoot complex data issues, analyse system performance, and provide solutions to development and service challenges.
  • Coach, mentor, and support team members through knowledge-sharing sessions and technical reviews.
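The data-quality responsibility above can be illustrated with a minimal sketch of the kind of validation gate a pipeline step might run before publishing a batch. This is plain Python, not tied to any specific framework; the function name and thresholds are illustrative, not taken from the posting.

```python
# Illustrative data-quality check: flag a batch whose required fields
# exceed an allowed null rate. Names and thresholds are hypothetical.

def validate_batch(rows, required_fields, max_null_rate=0.05):
    """Return a list of data-quality issues found in `rows` (list of dicts)."""
    issues = []
    if not rows:
        issues.append("batch is empty")
        return issues
    total = len(rows)
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / total
        if rate > max_null_rate:
            issues.append(f"{field}: null rate {rate:.1%} exceeds {max_null_rate:.1%}")
    return issues


batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": None, "amount": 5.0},
]
print(validate_batch(batch, ["id", "amount"], max_null_rate=0.25))
```

In a Databricks pipeline, the same idea would typically be expressed with DataFrame aggregations or Delta Live Tables expectations rather than row-level Python.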

Required Skills & Experience

  • Developer/engineering background in large-scale distributed data processing systems (or equivalent experience), with the ability to provide constructive, knowledge-based feedback.
  • Proficient in designing scalable and efficient data models tailored for analytical and operational workloads, ensuring data integrity and optimal query performance.
  • Practical experience implementing and managing Unity Catalog for centralized governance of data assets across Databricks workspaces, including access control, lineage tracking, and auditing.
  • Demonstrated ability to optimize data pipelines and queries using techniques such as partitioning, caching, indexing, and adaptive execution strategies to improve performance and reduce costs.
  • Programming in PySpark (must), SQL (must), Python (good to have).
  • Experience with Databricks is mandatory; DBT is good to have.
  • Implemented cloud data technologies on Azure (must); GCP or AWS are optional.
  • Knowledge of shortening development lead time and improving the data development lifecycle.
  • Worked in an Agile delivery framework.
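The optimization techniques named above (partitioning, caching) can be shown with a small, framework-free sketch: hash-partitioning records by key so each bucket can be processed independently, and caching an expensive lookup so repeated keys are resolved once. Everything here is illustrative; in Databricks/PySpark these ideas map to `DataFrame.repartition()`/`partitionBy()` and `DataFrame.cache()`.

```python
# Illustrative only: hash-partitioning and caching in plain Python.
# In a real PySpark job, use DataFrame.repartition()/partitionBy() and
# DataFrame.cache(); this sketch just shows the underlying mechanics.

from collections import defaultdict
from functools import lru_cache

def hash_partition(records, key, num_partitions):
    """Group records into buckets by hashing a key column, so all records
    sharing a key land in the same partition."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[hash(rec[key]) % num_partitions].append(rec)
    return partitions

@lru_cache(maxsize=None)
def enrich(country_code):
    """Stand-in for an expensive lookup; cached so each key is resolved once."""
    return {"IN": "India", "US": "United States"}.get(country_code, "Unknown")

records = [
    {"order_id": 1, "country": "IN"},
    {"order_id": 2, "country": "US"},
    {"order_id": 3, "country": "IN"},
]
parts = hash_partition(records, "country", num_partitions=2)
enriched = [dict(r, country_name=enrich(r["country"])) for r in records]
```

Because partitioning is keyed on a hash, both "IN" orders land in the same bucket, which is the property that lets joins and aggregations avoid shuffling data between partitions.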

Skills

PySpark, SQL, Azure Databricks, AWS

UST

IT Services and IT Consulting

Aliso Viejo, CA
