Work Mode: On-site
Job Type: Full Time

Job Description

About the Company

Our client is a trusted global innovator of IT and business services with a presence in 50+ countries. They specialize in digital and IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.

Position: Data Engineer
Experience: 6+ years
Location:
Employment Type: Full Time

Key Responsibilities

  • Design, develop, and maintain scalable, high-performance ETL/ELT pipelines using PySpark, Python, and SQL (see the PySpark sketch after this list).
  • Build and optimize Big Data processing workflows using Spark, Hive, and other distributed data frameworks.
  • Develop ETL/ELT frameworks and workflows on AWS using services such as S3, Glue, EMR, Lambda, Step Functions, and Redshift.
  • Implement and maintain enterprise-grade Snowflake data models, warehouses, data sharing, and performance tuning.
  • Collaborate with data architects to design end-to-end data architecture, ensuring scalability, security, and reliability.
  • Perform data ingestion, transformation, quality validation, and metadata management across structured and unstructured data sources.
  • Build CI/CD pipelines for data applications using tools like Git, Jenkins, or AWS CodePipeline.
  • Monitor and optimize data pipelines for cost efficiency, performance, and reliability.
  • Troubleshoot production issues and implement solutions for long-term stability and scalability.
  • Ensure compliance with data governance, security best practices, and industry standards.
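
To make the first responsibility concrete, here is a minimal sketch of a batch ETL job in PySpark: extract raw CSV from S3, apply a few cleansing transforms, and load partitioned Parquet for downstream warehouse queries. The bucket paths, column names, and quality rule are illustrative assumptions, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV landed in S3 (hypothetical path).
    raw = (
        spark.read.option("header", True)
        .option("inferSchema", True)
        .csv("s3://raw-bucket/orders/")
    )

    # Transform: deduplicate, apply a simple quality rule, derive a partition column.
    clean = (
        raw.dropDuplicates(["order_id"])              # hypothetical key column
        .filter(F.col("amount") > 0)                  # example data-quality rule
        .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Parquet for downstream Redshift/Athena queries.
    (
        clean.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://curated-bucket/orders/")       # hypothetical curated zone
    )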

Required Skills & Qualifications

  • 6+ years of hands-on experience as a Data Engineer or in a similar role.
  • Strong proficiency in PySpark, Spark SQL, and Big Data processing frameworks.
  • Advanced programming skills in Python (data processing, automation, modular coding).
  • Deep experience with AWS cloud services (S3, EMR, Glue, Lambda, IAM, Redshift, Athena).
  • Hands-on experience with Snowflake: data modeling, pipelines, SnowSQL, and performance tuning.
  • Strong SQL skills with experience in optimizing complex queries.
  • Experience with workflow orchestration tools (Airflow, AWS Step Functions, or Apache Oozie); see the Airflow sketch after this list.
  • Knowledge of DevOps practices, CI/CD, and version control (Git).
  • Experience handling large-scale, distributed data systems in production environments.
  • Strong understanding of data warehousing, ETL/ELT patterns, and best practices.
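
As a companion to the orchestration bullet above, here is a minimal Airflow DAG sketch that chains three placeholder tasks into a daily pipeline. It assumes Airflow 2.4+ (for the schedule argument); the DAG name and task callables are hypothetical.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; in practice these might trigger Glue or EMR jobs.
    def extract():
        print("pull raw files from S3")

    def transform():
        print("run PySpark transformations")

    def load():
        print("load curated data into Redshift or Snowflake")

    with DAG(
        dag_id="daily_orders_etl",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task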

Nice-to-Have Skills

  • Experience with streaming technologies (Kafka, Kinesis); see the streaming sketch after this list.
  • Knowledge of infrastructure-as-code (Terraform / CloudFormation).
  • Familiarity with data catalog and governance tools (AWS Glue Data Catalog, Collibra, Alation).
  • Exposure to ML data pipelines or MLOps frameworks.
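
For the streaming nice-to-have, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and appends it to S3 as Parquet. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clickstream_ingest").getOrCreate()

    # Subscribe to a Kafka topic as an unbounded stream (hypothetical broker/topic).
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "clickstream")
        .load()
    )

    # Kafka values arrive as binary; cast the payload to string for downstream parsing.
    payload = events.select(F.col("value").cast("string").alias("raw_json"))

    # Append micro-batches as Parquet; the checkpoint makes the stream restartable.
    query = (
        payload.writeStream.format("parquet")
        .option("path", "s3://curated-bucket/clickstream/")
        .option("checkpointLocation", "s3://curated-bucket/_chk/clickstream/")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()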
