Senior Data Engineer

5 - 10 years

8 - 13 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Roles and Responsibilities:

  • This role is for an ETL developer who will design, develop, maintain, and test data warehouse and data lake applications, collaborating with the development team, the testing team, and business partners to ensure successful delivery and implementation of application requirements.
  • Review and understand requirements, translating them into high-level design artifacts, including source-to-target mappings.
  • Create and maintain optimal data pipelines for the data warehouse and data lake.
  • Identify, design, and implement internal process improvements applicable to both on-premises and cloud migrations: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Perform unit testing, debugging, and performance testing, with a key focus on data accuracy and integrity.
  • Develop job schedules that integrate with upstream and downstream systems.
  • Accountable for on-time delivery of all documentation, including data mapping, design, build, testing, and deployment.
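A source-to-target mapping like the one described above can be sketched as a small Python structure that pairs each target column with its source column and an optional transform. This is a minimal illustration only; all table, column, and function names here are hypothetical, not tied to any specific warehouse or toolchain.

```python
# Minimal sketch of a source-to-target mapping applied to extracted rows.
# All column names and transforms below are hypothetical examples.

# Each target column maps to (source column, optional transform function).
SOURCE_TO_TARGET = {
    "customer_id": ("cust_id",   int),        # cast string id to integer
    "full_name":   ("cust_name", str.strip),  # trim stray whitespace
    "signup_date": ("created",   None),       # pass-through, no transform
}

def transform_row(source_row: dict) -> dict:
    """Apply the mapping to one source row, producing a target row."""
    target = {}
    for target_col, (source_col, fn) in SOURCE_TO_TARGET.items():
        value = source_row[source_col]
        target[target_col] = fn(value) if fn else value
    return target

# Example: one raw row from a hypothetical source extract.
row = {"cust_id": "42", "cust_name": "  Asha Rao ", "created": "2024-01-15"}
print(transform_row(row))
```

In practice the same mapping document doubles as the design artifact reviewed with business partners and as the specification the pipeline code is tested against.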

Required Skills:

  • Minimum of 5 years of development experience with relational, NoSQL, or other industry-accepted database platforms.
  • 5+ years of proficiency planning and delivering robust, enterprise-grade data engineering pipelines using Python, Apache Spark, and ETL tools.
  • Minimum of 5 years of data analysis experience.
  • Familiarity with version control systems such as Git.
  • Experience creating high-level design artifacts.
  • Ability to present technical concepts to senior-level business stakeholders.
  • Strong analytical and problem-solving skills.
  • Impeccable communication skills, both verbal and written.
  • Self-motivated worker.
  • Excellent interpersonal skills, positive attitude, team player.
  • Willingness to learn and adapt to change.
  • Strong time management and task prioritization skills.
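Pipelines of the kind listed above commonly end with a reconciliation step that checks data accuracy and integrity between source and target. The sketch below shows one simple approach — comparing row counts and content fingerprints — using only the standard library; the function names are illustrative, not from any specific framework.

```python
import hashlib

def row_fingerprint(row: dict) -> str:
    """Order-insensitive fingerprint of one row's key/value pairs."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source_rows: list, target_rows: list) -> dict:
    """Compare row counts and content fingerprints between source and target."""
    src = {row_fingerprint(r) for r in source_rows}
    tgt = {row_fingerprint(r) for r in target_rows}
    return {
        "count_match": len(source_rows) == len(target_rows),
        "missing_in_target": len(src - tgt),
        "unexpected_in_target": len(tgt - src),
    }
```

A check like this typically runs as the final task in the job schedule, failing the load (and alerting) when any of the three indicators is off.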

Desired Skills:

  • Experience with Data warehouse, Data Mart and Data Lake Technologies.
  • Experience with cloud technology - Azure Databricks, Azure Synapse, Azure Data Explorer, Azure SQL Database, Azure Data Factory, Azure Logic Apps, and Azure Delta Lake.
  • Familiar with distributed data (structured, semi-structured, unstructured, streaming) processing techniques using Apache Spark, Hadoop, Hive, Kafka, and big data ecosystem technologies.
  • Experience integrating with APIs.
  • Minimum of 2 years of Linux/shell scripting experience.
  • Prior experience working with globally distributed teams.
  • Experience with Agile-driven development.

