Posted: 3 weeks ago
Work from Office
Full Time
Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and AWS Redshift
- Optimize and tune Redshift queries, schemas, and performance for large-scale datasets
- Implement ETL/ELT processes to ensure accurate and timely data availability
- Collaborate with data analysts, engineers, and product teams to understand data requirements
- Ensure data quality, consistency, and integrity across systems
- Automate data workflows and improve pipeline efficiency using scripting and orchestration tools
- Monitor data pipeline performance and troubleshoot issues proactively
- Maintain documentation for data models, pipelines, and system configurations
- Ensure compliance with data governance and security standards
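To give candidates a concrete picture, the loading and data-quality duties above might look like the minimal sketch below. All names (the `analytics.events` table, the S3 path, the IAM role ARN, and both helper functions) are illustrative placeholders, not part of this role's actual stack.

```python
# Minimal sketch of one ETL step in a Python + Redshift pipeline.
# Table, bucket, and role names are hypothetical examples.

def build_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    """Build a Redshift COPY statement to bulk-load CSV data from S3."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

def check_row_counts(source_count: int, loaded_count: int) -> bool:
    """Simple data-quality gate: rows loaded must match rows in the source."""
    return source_count == loaded_count

sql = build_copy_sql(
    "analytics.events",
    "s3://example-bucket/events/2024-01-01/",
    "arn:aws:iam::123456789012:role/redshift-load",
)
print(sql)
```

In a real pipeline the statement would be executed through a Redshift connection and the row-count check wired into the orchestration tool as a post-load validation step.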