Posted: 2 weeks ago
Work from Office
Full Time
Key Responsibilities:
- Lead the design and development of scalable data pipelines using PySpark and ETL frameworks on Google Cloud Platform (GCP).
- Own end-to-end data architecture and solutions, ensuring high availability, performance, and reliability.
- Collaborate with data scientists, analysts, and stakeholders to understand data needs and deliver actionable insights.
- Optimize complex SQL queries and support advanced data transformations.
- Ensure best practices in data governance, data quality, and security.
- Mentor junior engineers and contribute to team capability development.

Requirements:
- 8+ years of experience in data engineering roles.
- Strong expertise in GCP data services (BigQuery, Dataflow, Pub/Sub, Composer, etc.).
- Hands-on experience with PySpark and building ETL pipelines at scale (a brief illustrative sketch follows below).
- Proficiency in SQL, with the ability to write and optimize complex queries.
- Solid understanding of data modeling, warehousing, and performance tuning.
- Experience with CI/CD pipelines, version control, and infrastructure as code is a plus.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- GCP Certification (e.g., Professional Data Engineer).
- Experience with Airflow, Kubernetes, or Terraform.
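As a rough illustration of the kind of PySpark-on-GCP pipeline work described above, the sketch below reads raw events from BigQuery, applies a simple quality filter and a daily aggregation, and writes the result back to a curated dataset. It is a minimal sketch, not this employer's actual pipeline: all project, table, and bucket names are hypothetical, and it assumes the spark-bigquery connector is available on the cluster (as it is on Dataproc).

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-daily-etl")
         .getOrCreate())

# Read raw events from BigQuery (table name is a placeholder;
# requires the spark-bigquery connector on the classpath).
raw = (spark.read.format("bigquery")
       .option("table", "my_project.raw.orders")
       .load())

# Quality filter plus a daily revenue aggregation per region.
daily = (raw
         .filter(F.col("order_status") == "COMPLETED")
         .withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers")))

# Write the aggregate back to a curated dataset; the indirect write
# method stages files in a GCS bucket (also a placeholder name).
(daily.write.format("bigquery")
 .option("table", "my_project.curated.daily_revenue")
 .option("temporaryGcsBucket", "my-staging-bucket")
 .mode("overwrite")
 .save())
```

In production such a job would typically be scheduled via Composer/Airflow and parameterized by run date rather than overwriting the full table.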