Posted: 1 day ago
Employment Type: Full Time
Work Mode: Hybrid
Experience: 5 to 10 Years
Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
Write complex SQL queries and optimize them for performance and scalability.
Monitor, troubleshoot, and improve data pipelines for reliability and performance.
Focusing on ETL automation using Python and PySpark, you will be responsible for designing, building, and maintaining efficient data pipelines, ensuring data quality and integrity for various applications.
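To give a flavor of the pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. All names, the record schema, and the quality rule are illustrative assumptions, not from the posting; a production pipeline on AWS would typically run as a Glue/PySpark job writing to Redshift rather than to an in-memory list.

```python
# Illustrative ETL sketch (hypothetical schema and rules, not from the posting).

def extract(rows):
    """Simulate pulling raw records from a source system."""
    return list(rows)

def transform(rows):
    """Normalize fields and drop records that fail a basic quality check."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:  # data-quality gate: reject null amounts
            continue
        cleaned.append({
            "customer": row["customer"].strip().title(),
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

def load(rows, target):
    """Append transformed records to an in-memory 'warehouse' table."""
    target.extend(rows)
    return len(rows)

raw = [
    {"customer": "  alice ", "amount": "10.5"},
    {"customer": "bob", "amount": None},  # rejected by the quality gate
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

The same extract/transform/load split carries over directly to PySpark, where `transform` becomes DataFrame operations and `load` a write to the target store.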
Globant