eGrove Systems - Principal Data Engineer - ETL/Python

10 years

0 Lacs

Posted: 1 week ago | Platform: LinkedIn

Apply

Work Mode

On-site

Job Type

Full Time

Job Description

Key Responsibilities :

  • Data Architecture & Design: Lead the design and implementation of highly scalable and fault-tolerant data pipelines using modern platforms such as Azure Synapse, Snowflake, or Databricks workflows.
  • Pipeline Development: Develop, construct, test, and maintain robust data architectures, ensuring optimal data flow and ingestion from diverse sources.
  • Cloud Data Platform Expertise: Leverage expertise in the Azure Data Platform components, including Azure Data Factory (ADF) for orchestration, Azure Data Lake (ADLS) for storage, and Azure SQL Database for operational data storage.
  • Coding & Scripting: Apply strong proficiency in Python or Scala to develop complex data transformation logic and custom ETL/ELT processes.
  • SQL Optimization: Exhibit SQL mastery by writing advanced, efficient queries, performing performance tuning, and optimizing database schemas and procedures.
  • PL/SQL Development: Design and develop efficient PL/SQL procedures for data manipulation and business logic within relational databases.
  • Data Quality & Governance: Implement processes and systems to monitor data quality, ensuring accuracy, completeness, and reliability across all data assets.
  • Reporting & BI Support: Collaborate with BI analysts, providing clean, optimized data feeds and supporting the development of dashboards using tools like Power BI.
  • Cross-Cloud Integration: Utilize experience with general cloud platforms (AWS and Azure) and associated data services to inform architectural decisions and potential future integrations.
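The ETL/ELT transformation work described above can be sketched in plain Python. This is an illustrative example only, not part of the role's actual codebase; the field names (`customer_id`, `event_date`) and the assumed source date format (DD/MM/YYYY) are hypothetical.

```python
from datetime import datetime

def transform_records(raw_records):
    """Minimal ETL transform step: validate, clean, and reshape raw rows.

    Drops rows missing required fields (a basic data-quality gate),
    normalizes whitespace and casing, and parses dates into ISO 8601.
    """
    cleaned = []
    for row in raw_records:
        # Data quality gate: skip incomplete rows rather than propagating nulls.
        if not row.get("customer_id") or not row.get("event_date"):
            continue
        cleaned.append({
            "customer_id": row["customer_id"].strip(),
            "email": (row.get("email") or "").strip().lower(),
            # Normalize the assumed source format (DD/MM/YYYY) to ISO 8601.
            "event_date": datetime.strptime(
                row["event_date"], "%d/%m/%Y"
            ).date().isoformat(),
        })
    return cleaned

raw = [
    {"customer_id": " C001 ", "email": "Alice@Example.COM", "event_date": "05/11/2024"},
    {"customer_id": "", "email": "x@y.com", "event_date": "06/11/2024"},  # dropped: no id
]
result = transform_records(raw)
print(result)
```

In a production pipeline this logic would typically live inside an orchestrated step (for example an ADF-triggered Databricks job), with rejected rows routed to a quarantine table for the data-quality monitoring the role calls for.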

Required Skills & Qualifications

  • The successful candidate must possess 10+ years of progressive experience in data engineering, emphasizing architecture and optimization.

Technical Expertise

  • Cloud Data Warehousing: Extensive, hands-on experience designing and optimizing scalable data pipelines using modern platforms like Azure Synapse, Snowflake, or Databricks workflows.
  • Programming: Strong proficiency in Python or Scala for complex data processing, transformation, and ETL/ELT development.
  • SQL Mastery: Advanced query writing, performance tuning, and optimization.
  • PL/SQL Development: Hands-on experience developing efficient PL/SQL procedures.
  • Other Languages: Experience with other relevant programming and scripting languages, including T-SQL, and general data query optimization.
  • Azure Data Platform: Deep, hands-on experience with core Azure services: Azure Data Factory (ADF), Azure Data Lake (ADLS), and Azure SQL Database.
  • Cloud Platforms: Broad experience with major cloud platforms (e.g., AWS and Azure) and their associated services for data processing and storage.
  • Business Intelligence: Experience working with reporting and visualization tools, specifically Power BI.
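The SQL performance-tuning skill listed above often comes down to reading query plans before and after adding an index. A minimal, self-contained sketch using SQLite (chosen only because it ships with Python; the table and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(f"C{i % 100:03d}", float(i)) for i in range(1000)],
)

# Without an index, filtering on customer_id forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer_id = ?",
    ("C007",),
).fetchall()

# A covering index on (customer_id, amount) lets the engine seek directly
# to matching rows and answer the aggregate from the index alone.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, amount)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer_id = ?",
    ("C007",),
).fetchall()

print(plan_before[-1][-1])  # e.g. a SCAN over orders
print(plan_after[-1][-1])   # e.g. a SEARCH using the covering index
```

The same scan-versus-seek reasoning carries over to Azure SQL Database, Synapse, or Snowflake, though each platform exposes its plans through its own tooling rather than `EXPLAIN QUERY PLAN`.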

Professional Competencies

  • Proven ability to work independently, manage multiple priorities, and meet deadlines in a fast-paced environment.
  • Excellent problem-solving, analytical, and communication skills.
  • Experience mentoring or leading technical teams is a significant plus.
(ref:hirist.tech)