
PySpark Data Engineer

0 - 8 years

0 Lacs

Posted: 2 days ago | Platform: Indeed


Work Mode

On-site

Job Type

Full Time

Job Description

Job Information

Date Opened: 07/03/2025

Job Type: Full time

Industry: IT Services

City: Hyderabad

State/Province: Telangana

Country: India

Zip/Postal Code: 500081

About Us

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, OH, with offices in Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space through our suite of solutions, accelerators, frameworks, and thought leadership.

Job Title: PySpark Data Engineer
Experience: 5 – 8 Years
Location: Hyderabad
Employment Type: Full-Time


Job Summary:


We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The ideal candidate will have 5–8 years of experience in designing and implementing data pipelines using PySpark, AWS Glue, and Apache Airflow, with strong proficiency in SQL. You will be responsible for building scalable data processing solutions, optimizing data workflows, and collaborating with cross-functional teams to deliver high-quality data assets.


Key Responsibilities:


  • Design, develop, and maintain large-scale ETL pipelines using PySpark and AWS Glue.

  • Orchestrate and schedule data workflows using Apache Airflow.

  • Optimize data processing jobs for performance and cost-efficiency.

  • Work with large datasets from various sources, ensuring data quality and consistency.

  • Collaborate with Data Scientists, Analysts, and other Engineers to understand data requirements and deliver solutions.

  • Write efficient, reusable, and well-documented code following best practices.

  • Monitor data pipeline health and performance; resolve data-related issues proactively.

  • Participate in code reviews, architecture discussions, and performance tuning.


Requirements

  • 5–8 years of experience in data engineering roles.

  • Strong expertise in PySpark for distributed data processing.

  • Hands-on experience with AWS Glue and other AWS data services (S3, Athena, Lambda, etc.).

  • Experience with Apache Airflow for workflow orchestration.

  • Strong proficiency in SQL for data extraction, transformation, and analysis.

  • Familiarity with data modeling concepts and data lake/data warehouse architectures.

  • Experience with version control systems (e.g., Git) and CI/CD processes.

  • Ability to write clean, scalable, and production-grade code.


Benefits

Company standard benefits.
