Posted: 6 days ago | Platform: On-site
Role Proficiency:
Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.
We are seeking a highly experienced Lead Data Engineer with over 12 years of expertise in building scalable data solutions and modern data platforms. The ideal candidate will have a strong background in PySpark, SQL, Azure Databricks, and cloud platforms such as AWS or GCP. You will lead the design and implementation of robust ETL/ELT pipelines and data modeling using Kimball/star schema, and mentor a team of engineers in a fast-paced, data-driven environment.

Key Responsibilities:
- Design, develop, and maintain scalable, efficient ETL/ELT data pipelines to support data ingestion, processing, and analytics.
- Lead architecture and design discussions for big data solutions leveraging PySpark, SQL, and cloud-native tools (Azure Databricks, AWS Glue, GCP Dataflow, etc.).
- Implement and optimize data models using Kimball/star schema or similar dimensional modeling techniques.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver solutions.
- Ensure data quality, integrity, and governance across pipelines and data platforms.
- Drive the adoption of engineering best practices, code reviews, CI/CD, and automation in data workflows.
- Mentor and lead a team of data engineers, providing technical guidance and career development support.

Required Skills & Qualifications:
- 12+ years of hands-on experience in data engineering or related fields.
- Strong expertise in PySpark and advanced SQL for data processing and transformation.
- Proven experience with Azure Databricks and at least one major cloud platform (AWS or GCP).
- Deep understanding of data warehousing concepts and dimensional data modeling (Kimball/star schema).
- Expertise in designing and building data pipelines using ETL/ELT frameworks.
- Familiarity with orchestration tools (e.g., Airflow, Azure Data Factory, Step Functions).
- Strong problem-solving skills and the ability to manage large-scale data systems.
- Excellent communication and leadership skills, with experience leading engineering teams.
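To illustrate the Kimball/star-schema modeling the role calls for, here is a minimal sketch: a fact table keyed to a dimension table, queried with a join and an aggregate. The table and column names are hypothetical examples, not part of the role description.

```python
import sqlite3

# In-memory database for a minimal star-schema demo (hypothetical tables).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes, one row per product.
cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
# Fact table: measures (qty, amount) keyed to the dimension by product_key.
cur.execute("CREATE TABLE fact_sales (product_key INTEGER, qty INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 3, 30.0), (1, 1, 10.0), (2, 2, 50.0)])

# Typical dimensional query: aggregate facts grouped by a dimension attribute.
rows = cur.execute("""
    SELECT d.name, SUM(f.qty), SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('gadget', 2, 50.0), ('widget', 4, 40.0)]
```

The same pattern scales to PySpark DataFrames or a warehouse: facts stay narrow and numeric, dimensions carry the descriptive context, and analytics queries join and aggregate across them.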
Skills: Python, PySpark, Azure Databricks, Azure Data Factory
UST Global
Location: Thiruvananthapuram
Salary: Not disclosed