Senior Data Engineer

5 - 10 years

35 - 40 Lacs

Posted: 20 hours ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Responsibilities

  • Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the data types involved across the data lifecycle, including raw, curated, and lake data, to ensure effective data integration.
  • SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that support accurate reporting, including optimizing queries for performance and ensuring data quality.
  • Linux Command Line: Use Linux command-line tools such as bash shell scripts, cron jobs, grep, and awk to perform data processing tasks efficiently, automate workflows, and manage data pipelines (see the sketch after this list).
  • Data Protection: Ensure compliance with data protection and privacy requirements, including regulations such as GDPR, by implementing best practices for data handling and maintaining the confidentiality of sensitive information.
  • Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio, so that stakeholders can easily communicate about and understand technical specifications.
  • API Integration and Data Formats: Work with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to enable seamless data integration and automation, and demonstrate proficiency in parsing various data formats, including CSV and Parquet (see the sketch after this list).
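
To ground the Linux Command Line and API Integration responsibilities above, here is a minimal Python sketch: it pulls a CSV extract from S3 with boto3 and republishes it as Parquet. The bucket names, object key, and local paths are hypothetical placeholders, not details from this posting, and it assumes boto3, pandas, and pyarrow are installed.

```python
"""Minimal sketch: acquire a raw CSV extract from S3 and republish it as Parquet.

All bucket, key, and path names below are hypothetical placeholders.
Requires boto3, pandas, and pyarrow.
"""
import boto3
import pandas as pd

RAW_BUCKET = "example-raw-zone"          # hypothetical raw-data bucket
CURATED_BUCKET = "example-curated-zone"  # hypothetical curated-data bucket
KEY = "vendor_feed/orders.csv"           # hypothetical object key


def csv_to_parquet(raw_bucket: str, curated_bucket: str, key: str) -> None:
    s3 = boto3.client("s3")

    # Acquire the raw CSV dropped by an external data provider.
    local_csv = "/tmp/extract.csv"
    s3.download_file(raw_bucket, key, local_csv)

    # Parse the CSV and rewrite it in columnar Parquet form.
    df = pd.read_csv(local_csv)
    local_parquet = "/tmp/extract.parquet"
    df.to_parquet(local_parquet, index=False)  # pyarrow does the encoding

    # Publish to the curated zone for downstream consumers (e.g. Glue crawlers).
    s3.upload_file(local_parquet, curated_bucket, key.replace(".csv", ".parquet"))


if __name__ == "__main__":
    csv_to_parquet(RAW_BUCKET, CURATED_BUCKET, KEY)
```

A crontab entry such as `0 2 * * * python3 /opt/etl/csv_to_parquet.py` (path hypothetical) would run this nightly, which is the kind of command-line scheduling the role describes.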

Key Requirements:

  • 5+ years of experience as a Data Engineer, focusing on ETL development.
  • 3+ years of experience in SQL, writing complex queries for data retrieval and manipulation (see the sketch after this list).
  • 3+ years of experience with the Linux command line and bash scripting.
  • Familiarity with data modelling in analytical databases.
  • Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 Parquet/CSV).
  • Experience with RESTful APIs and AWS services such as S3, Glue, and Lambda.
  • Experience using Confluence for tracking documentation.
  • Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels.
  • Ability to work independently and manage multiple tasks and priorities in a dynamic environment.
  • Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
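
As a hedged illustration of the "complex queries" requirement above, the sketch below builds a tiny semantic layer with a CTE and a window function. It runs against an in-memory SQLite database so it is self-contained; the table and column names are hypothetical, not from this posting.

```python
"""Minimal sketch of the kind of complex SQL the role asks for:
a CTE plus a window function producing a small semantic layer.
Self-contained via an in-memory SQLite database; names are hypothetical.
"""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_date TEXT, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('2024-01-01', 'NA', 120.0),
        ('2024-01-01', 'EU',  80.0),
        ('2024-01-02', 'NA',  95.0),
        ('2024-01-02', 'EU', 130.0);
""")

query = """
WITH daily AS (                      -- aggregate to one row per day/region
    SELECT order_date, region, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date, region
)
SELECT order_date,
       region,
       revenue,
       SUM(revenue) OVER (           -- running total per region
           PARTITION BY region ORDER BY order_date
       ) AS revenue_to_date
FROM daily
ORDER BY region, order_date;
"""

for row in conn.execute(query):
    print(row)
```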

Good to Have:

  • Experience with Spark (see the sketch after this list).
  • Understanding of data visualization tools, particularly Tableau.
  • Knowledge of data clean room techniques and integration methodologies.
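
Since Spark experience is listed as a plus, here is a minimal PySpark sketch of the kind of batch aggregation implied. The input/output paths and column names are hypothetical placeholders, and it assumes a Spark runtime with S3 access (e.g. EMR).

```python
# Minimal PySpark sketch: roll a curated Parquet dataset up to daily revenue.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

orders = spark.read.parquet("s3://example-curated-zone/orders/")

# One row per day and region -- the kind of semantic layer reporting sits on.
daily = (
    orders.groupBy("order_date", "region")
          .agg(F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3://example-curated-zone/orders_daily/")
spark.stop()
```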
