Posted: 2 days ago
Work from Office
Full Time
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, and have strong coding skills in Python, PySpark, and SQL. The position demands the ability to work independently across a variety of data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
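As a rough illustration of the pipeline work described above (ingesting, wrangling, transforming, and joining data with PySpark), a minimal sketch follows. The bucket paths, table names, and column names are hypothetical placeholders, not details of the actual project.

```python
# Minimal PySpark sketch: ingest two hypothetical sources, clean them, join, and write out.
# All paths, schemas, and column names below are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Ingest: read raw data from hypothetical S3 locations
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
customers = spark.read.parquet("s3://example-bucket/raw/customers/")

# Wrangle/transform: drop incomplete rows and normalise types
orders_clean = (
    orders
    .dropna(subset=["order_id", "customer_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Join and aggregate into an analytics-ready dataset
daily_revenue = (
    orders_clean
    .join(customers, "customer_id", "left")
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_segment")
    .agg(F.sum("amount").alias("revenue"))
)

# Write the curated output back to hypothetical storage
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```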
Outcomes:
Measures of Outcomes:
Outputs Expected: Code, Documentation, Configure, Test, Domain Relevance, Manage Project, Manage Defects, Estimate, Manage Knowledge, Release, Design, Interface with Customer, Manage Team
Certifications:
Skill Examples:
Knowledge Examples:
Additional Comments:
# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables:
We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025. The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
- Design, build, and maintain scalable data pipelines using Databricks and PySpark.
- Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
- Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
- Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
- Ensure data quality, performance, and reliability across data workflows.
- Participate in code reviews, data architecture discussions, and performance optimization initiatives.
- Support migration and modernization of legacy data systems to modern cloud-based solutions.
Key Skills:
- Hands-on experience with Databricks, PySpark, and Python for building ETL/ELT pipelines.
- Proficiency in SQL, including performance tuning, complex joins, CTEs, and window functions (a brief sketch follows this list).
- Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
- Experience with data modeling, schema design, and performance optimization.
- Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
- Excellent problem-solving, communication, and collaboration skills.
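To give a flavour of the SQL proficiency called out above (CTEs and window functions), here is a brief sketch run through Spark SQL, assuming an existing Databricks/Spark session; the table and column names are hypothetical and stand in for the project's actual catalog objects.

```python
# Hypothetical example: find each customer's most recent order using a CTE and a window function.
# Table and column names are illustrative; replace them with real catalog objects.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_example").getOrCreate()

latest_orders = spark.sql("""
    WITH ranked_orders AS (
        SELECT
            customer_id,
            order_id,
            amount,
            ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY order_ts DESC
            ) AS rn
        FROM sales.orders            -- hypothetical table
    )
    SELECT customer_id, order_id, amount
    FROM ranked_orders
    WHERE rn = 1                     -- keep only the most recent order per customer
""")

latest_orders.show(10)
```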
Required Skills: Databricks, PySpark & Python, SQL, AWS Services
UST