Data Engineer

Experience: 3 years

Salary: 0 Lacs

Posted: 4 hours ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Job Responsibilities

  • Collaborate with project stakeholders (the client) to identify product and technical requirements.
  • Develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data (a minimal PySpark sketch follows this list).
  • Write clean, maintainable, and testable code for data workflows.
  • Troubleshoot data issues and perform root cause analysis.
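
For context, a minimal PySpark sketch of the kind of pipeline work described above; the Spark session setup is standard, but the input path, column names, and output location are hypothetical examples, not project specifics.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: source path, columns, and output table are illustrative only.
spark = SparkSession.builder.appName("daily_orders_pipeline").getOrCreate()

# Ingest raw order events (assumed to be stored as Parquet).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic cleanup and an aggregation a downstream team might consume.
daily_totals = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Write results partitioned by date so downstream queries can prune partitions.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_order_totals/"
)
```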

Must have:


  • 3+ years of hands-on coding experience in PySpark & SQL.
  • Excellent verbal and business communication skills.
  • Experience writing complex SQL queries and tuning query performance (an illustrative example follows this list).
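
To illustrate the query-tuning expectation, a hedged Spark SQL example; the table names, the date filter, and the choice of a broadcast join hint are assumptions made purely for this sketch.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query_tuning_example").getOrCreate()

# Hypothetical tables: a large fact table joined to a small dimension table.
spark.read.parquet("s3://example-bucket/curated/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3://example-bucket/curated/regions/").createOrReplaceTempView("regions")

# A broadcast hint avoids a shuffle when one side of the join is small;
# this is one common optimization, chosen here only for illustration.
result = spark.sql("""
    SELECT /*+ BROADCAST(r) */
           r.region_name,
           COUNT(*)            AS order_count,
           SUM(o.order_amount) AS total_amount
    FROM orders o
    JOIN regions r
      ON o.region_id = r.region_id
    WHERE o.order_date >= DATE '2024-01-01'
    GROUP BY r.region_name
""")

# Inspecting the physical plan is a typical first tuning step:
# check for BroadcastHashJoin vs. SortMergeJoin, partition pruning, etc.
result.explain()
```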



Good to have:

  • Experience working on large-scale data warehouse projects; Teradata experience is a plus.
  • Experience with ETL tools.
  • Experience with workflow scheduler tools; Apache Airflow experience is a plus (a minimal DAG sketch follows this list).
  • Working experience with Kubernetes, Unix, and GitHub.
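
For the workflow-scheduling point, a minimal Apache Airflow sketch, assuming Airflow 2.4 or newer; the DAG id, schedule, and task callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from a source system.
    print("extracting raw data")


def transform():
    # Placeholder: e.g. submit a PySpark job that cleans and aggregates the data.
    print("transforming data")


def load():
    # Placeholder: publish curated tables to the warehouse.
    print("loading curated tables")


# Hypothetical DAG: id, schedule, and start date are illustrative only.
with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```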

EXL

Business Process Management / Analytics

New York
