Data Engineer - Data Platforms - Azure

Experience: 0 years

Salary: 0 Lacs

Posted: 2 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Introduction

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your role and responsibilities

We are looking for a Data Engineer with strong hands-on Databricks experience who will design and optimize scalable data pipelines, work with Delta Lakehouse architectures, and enable advanced analytics across Azure or AWS platforms.

Key Responsibilities

  • Develop ETL/ELT pipelines in Databricks using PySpark, Spark SQL, and Delta Lake (a minimal sketch follows this list).
  • Use Delta Live Tables for simplified pipeline orchestration.
  • Implement Databricks Auto Loader for real-time and batch data ingestion.
  • Build Databricks SQL dashboards and queries for reporting and analytics.
  • Manage Databricks clusters, jobs, and workflows, ensuring cost efficiency.
  • Work with cloud-native services (ADF, Synapse, and ADLS on Azure; or Glue, S3, and Redshift on AWS) for data integration.
  • Apply Unity Catalog for role-based access control and lineage tracking.
  • Collaborate with data scientists to support ML workloads using MLflow.
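To make the pipeline work above concrete, here is a minimal sketch of the ingestion pattern the first and third bullets describe: Databricks Auto Loader reading raw JSON files into a streaming DataFrame and writing them to a bronze Delta table. The storage paths and table name are hypothetical placeholders, and `spark` is the session provided by the Databricks runtime.

```python
# Minimal Auto Loader -> Delta ingestion sketch (hypothetical paths and table).
raw_path = "abfss://landing@examplestorage.dfs.core.windows.net/events/"
checkpoint = "abfss://landing@examplestorage.dfs.core.windows.net/_checkpoints/events/"

stream = (
    spark.readStream
    .format("cloudFiles")                             # Auto Loader source
    .option("cloudFiles.format", "json")              # raw files are JSON
    .option("cloudFiles.schemaLocation", checkpoint)  # where inferred schema is tracked
    .load(raw_path)
)

(
    stream.writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint)  # exactly-once progress tracking
    .trigger(availableNow=True)                # incremental batch run; drop for continuous streaming
    .toTable("bronze.events")                  # hypothetical target table
)
```

The same source could instead be declared as a Delta Live Tables dataset, letting DLT manage the orchestration and checkpointing that this standalone job handles explicitly.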

Required technical and professional expertise

  • Strong Databricks expertise: PySpark, Spark SQL, and Delta Lake (ACID transactions, schema evolution, time travel; illustrated in the sketch after this list).
  • Exposure to Delta Live Tables, Auto Loader, Unity Catalog, and MLflow.
  • Hands-on experience with Azure or AWS data services.
  • Strong SQL and Python programming for data pipelines.
  • Knowledge of data modeling (star/snowflake schemas, lakehouse).
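As a quick illustration of the Delta Lake features named in the first bullet, the hedged sketch below runs an ACID upsert with MERGE, appends with schema evolution enabled, and reads an earlier table version via time travel. The `silver.customers` table is a hypothetical, already-existing Delta table, and `spark` again comes from the Databricks runtime.

```python
from delta.tables import DeltaTable

# Hypothetical incoming batch of customer updates.
updates_df = spark.createDataFrame(
    [(1, "Asha", "Pune"), (2, "Ravi", "Mumbai")],
    "customer_id INT, name STRING, city STRING",
)

# ACID upsert: MERGE the updates into the (assumed existing) Delta table.
target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates_df.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Schema evolution: mergeSchema lets an append introduce columns the table lacks.
(
    updates_df.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver.customers")
)

# Time travel: read the table as it existed at version 0.
v0 = spark.read.option("versionAsOf", 0).table("silver.customers")
```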

Preferred technical and professional experience

  • Streaming data experience (Kafka, Event Hubs, Kinesis).
  • Familiarity with the Databricks REST APIs (see the example after this list).
  • Certifications: Databricks Data Engineer Associate, Azure DP-203, or AWS Analytics Specialty.
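For the REST API point, here is a hedged example of listing jobs in a workspace through the Databricks Jobs API 2.1; the host and token are placeholders read from environment variables rather than real credentials.

```python
import os
import requests

# Hypothetical workspace URL and personal access token, exported beforehand, e.g.:
#   export DATABRICKS_HOST=https://adb-1234567890123456.7.azuredatabricks.net
#   export DATABRICKS_TOKEN=<token from a secret scope or a PAT>
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"limit": 25},  # page size (max 25 in Jobs API 2.1)
)
resp.raise_for_status()

for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```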


IBM

Information Technology

Armonk
