Posted: 1 day ago
Platform: Hybrid
Full Time
• Design, develop, and maintain scalable and reliable data pipelines using AWS / Azure services, PySpark, and Databricks.
• Collaborate with cross-functional teams to understand data requirements, identify data sources, and define data ingestion strategies.
• Implement data extraction, transformation, and loading (ETL) processes to enable efficient data integration from various sources.
• Hands-on experience in developing and optimizing Databricks data pipelines using PySpark (an illustrative sketch follows this list).
• Proficiency in SQL, Python, and ETL processes.
• Optimize and tune data pipelines to ensure high performance, scalability, and data quality.
• Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner.
• Collaborate with data scientists and analysts to provide them with clean, transformed, and reliable data for analysis and modeling.
• Develop and maintain data documentation, including data lineage, data dictionaries, and metadata management.
• Level of experience in Databricks: E4
• Should have worked across different functional domains (e.g. finance, HR, geology, HSE).
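For illustration only (not part of the posting): a minimal PySpark sketch of the kind of extract-transform-load pipeline the role describes, as it might run on Databricks. The source path, column names, and target table name (curated.orders) are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw source files (path is a placeholder)
raw = spark.read.parquet("/mnt/raw/orders/")

# Transform: deduplicate, derive a date column, and drop invalid rows
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: persist as a Delta table for downstream analysts and data scientists
curated.write.format("delta").mode("overwrite").saveAsTable("curated.orders")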
CGI