Data Engineer

Experience: 4 - 5 years


Posted: 23 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Data Engineer (AWS QuickSight, Glue, PySpark)

Location: Noida

Job Summary:

We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.


Key Responsibilities:

  • Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
  • Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
  • Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
  • Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
  • Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
  • Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
  • Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
  • Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.
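To give candidates a concrete feel for the ETL work described above, here is a minimal, purely illustrative Python sketch of the kind of transform step such a pipeline performs. In the actual role this logic would run inside an AWS Glue PySpark job reading from S3 or a database; the record fields, function name, and sample data below are hypothetical.

```python
# Illustrative sketch only: a toy extract-transform-load step of the kind
# an AWS Glue / PySpark job in this role would perform. In production this
# logic would use pyspark.sql DataFrames; all names here are hypothetical.
import json


def transform(raw_records):
    """Clean raw order records into a unified model for reporting."""
    out = []
    for rec in raw_records:
        if rec.get("order_id") is None:  # drop malformed rows (data quality)
            continue
        out.append({
            "order_id": str(rec["order_id"]),
            "amount_usd": round(float(rec.get("amount", 0)), 2),
            "region": (rec.get("region") or "UNKNOWN").upper(),
        })
    return out


# Extract: in a real pipeline this would read from S3, an API, or a database.
raw = [
    {"order_id": 101, "amount": "19.994", "region": "emea"},
    {"amount": "5.00", "region": "apac"},  # malformed: missing order_id
    {"order_id": 102, "amount": 7, "region": None},
]

clean = transform(raw)
# Load: in production, write Parquet back to the data lake for QuickSight.
print(json.dumps(clean, indent=2))
```

The same shape (filter malformed rows, normalize types, standardize codes, write a unified model) is what the Glue/PySpark and QuickSight responsibilities above boil down to at scale.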


Required Skills & Qualifications:

  • Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
  • Strong experience with PySpark for large-scale data processing and transformation.
  • Expertise in SQL and data modeling for relational and non-relational databases.
  • Experience building and optimizing ETL pipelines and data integration workflows.
  • Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong programming skills in Python; experience with automation and scripting.
  • Ability to work collaboratively in agile environments and manage multiple priorities effectively.
  • Excellent problem-solving and communication skills.


Preferred Qualifications:

  • AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer)


Company: Pentair (Industrial Machinery Manufacturing)
