Azure Databricks Data Engineer

Experience: 5 - 8 years

Salary: 5 - 12 Lacs

Posted: 1 day ago | Platform: Foundit

Work Mode: On-site

Job Type: Full Time

Job Description

Key Responsibilities

  • Big Data Solution Design & Implementation: Design and implement Hadoop big data solutions in alignment with business needs and project schedules.
  • ETL Development & Optimization: Code, test, and document new or modified data systems to create robust and scalable applications for data analytics. This includes developing and maintaining ETL pipelines using Azure Databricks, PySpark, and Azure Data Factory (ADF); see the PySpark sketch after this list.
  • Data Consistency & Collaboration: Work with other Big Data developers to ensure all data solutions are consistent. Partner with the business community to understand requirements and deliver user training.
  • Technology Research & Innovation: Perform technology and product research to better define requirements, resolve important issues, and improve the overall capability of the analytics technology stack. Evaluate and provide feedback on future technologies and new releases/upgrades.
  • Analytical Solutions Support: Support Big Data and batch/real-time analytical solutions leveraging transformational technologies.
  • Project Leadership: Work on multiple projects as a technical team member, or drive user requirement analysis, software application design and development, testing and build automation tools, and research/incubation of new technologies and frameworks.
  • Cloud & Methodologies: Build solutions with public cloud providers such as Azure, leveraging experience with agile or other rapid application development methodologies and tools like Bitbucket, Jira, and Confluence.
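For illustration, a minimal sketch of the kind of Databricks ETL step this role describes: read a raw file landed by an upstream copy activity, apply light cleansing, and write a Delta table. The storage path, table name, and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed by an upstream ADF copy activity (assumed path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://landing@example.dfs.core.windows.net/orders/"))

# Transform: deduplicate, normalize types, derive a total column.
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .withColumn("total", F.col("quantity") * F.col("unit_price")))

# Load: write a Delta table partitioned by date for downstream analytics.
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.orders_clean"))
```

In practice such a notebook or script would typically run as a task in a Databricks job, orchestrated and scheduled by an ADF pipeline.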

Skills

  • Hands-on experience with the Databricks stack.
  • Expertise in data engineering technologies (e.g., Spark, Hadoop, Kafka).
  • Proficiency in streaming technologies.
  • Hands-on experience in Python and SQL.
  • Expertise in implementing data warehousing solutions.
  • Expertise in any ETL tool (e.g., SSIS, Redwood).
  • Good understanding of submitting jobs using Workflows, the API, or the CLI; see the sketch after this list.
  • Strong experience in Azure Databricks, PySpark, and SQL (Mandatory).
  • Hands-on experience in Azure Data Factory (ADF) (Mandatory).
  • Project exposure to cloud migration (Mandatory).
  • Strong analytical and problem-solving skills.
  • Ability to work independently and take ownership of tasks.
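As a concrete example of the job-submission skill above, here is a minimal sketch that triggers an existing Databricks Workflows job through the Jobs 2.1 REST API. The workspace URL, access token, and job ID are placeholders, not values from this posting.

```python
import requests

# Placeholders: supply your own workspace URL and personal access token
# (in real code, read the token from a secret store, never hardcode it).
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapiXXXXXXXXXXXX"

# Trigger a run of an existing job (job_id 123 is a placeholder).
resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 123, "notebook_params": {"run_date": "2024-01-01"}},
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])
```

The Databricks CLI exposes the same operation (roughly `databricks jobs run-now` with the job ID), and the same jobs can also be launched interactively from the Workflows UI.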

Qualifications

  • Bachelor's degree in Information Technology, Computer Science, or a related field.

Company: Pradeepit Consulting Services

Industry: Information Technology

Location: Townsville