Experience

6 - 8 years

Salary

20 - 25 Lacs

Posted: 9 hours ago | Platform: Naukri

Work Mode

Work from Office

Job Type

Full Time

Job Description

  1. 5+ years of experience in data engineering, with a strong focus on PySpark/Spark for big data processing (a short PySpark sketch follows this list).
  2. Expertise in building data pipelines and ingestion frameworks from relational, semi-structured (JSON, XML), and unstructured sources (logs, PDFs).
  3. Proficiency in Python with strong knowledge of data processing libraries.
  4. Strong SQL skills for querying and validating data in platforms like Amazon Redshift, PostgreSQL, or similar.
  5. Experience with distributed computing frameworks (e.g., Spark on EMR, Databricks).
  6. Familiarity with workflow orchestration tools (e.g., AWS Step Functions or similar).
  7. Solid understanding of data lake / data warehouse architectures and data modeling basics.
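
For illustration only, here is a minimal sketch of the kind of PySpark ingestion pipeline the requirements above describe: reading semi-structured JSON, applying basic cleansing, and writing partitioned Parquet to a data-lake location. All paths, column names, and the app name are hypothetical assumptions, not part of the posting.

```python
# Hypothetical PySpark ingestion sketch; paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ingestion-sketch")  # assumed app name
    .getOrCreate()
)

# Ingest semi-structured JSON event logs (hypothetical S3 path).
events = spark.read.json("s3://example-bucket/raw/events/")

# Basic cleansing: cast timestamps, drop null keys, de-duplicate records.
cleaned = (
    events
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
    .dropDuplicates(["event_id"])
)

# Write date-partitioned Parquet to a curated data-lake layer (hypothetical path),
# from which a warehouse such as Redshift could load or query the data.
(
    cleaned
    .withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```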

Tata Consultancy Services

Information Technology and Consulting

Thane
