4 - 6 years
INR 6.0 - 12.0 Lacs P.A.
Bengaluru
Posted: 2 months ago
Work from Office
Full Time
Must Have: GCP, PySpark

Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Python on GCP.
- Collaborate with cross-functional teams to gather business requirements and design solutions.
- Develop scalable data architectures on Google Cloud Platform (GCP) using PySpark, BigQuery, Pub/Sub, and related services (a minimal sketch of this pattern follows the candidate profile below).
- Ensure high-quality data processing by building robust ETL pipelines and covering them with unit and integration tests.
- Participate in code reviews to ensure adherence to coding standards and best practices.

Desired Candidate Profile
- 4-6 years of experience in data engineering, with strong Python programming skills.
- Solid understanding of big data technologies: the Hadoop ecosystem (HDFS), Spark (PySpark), and messaging systems such as Kafka or GCP Pub/Sub.
- Hands-on experience with cloud platforms, especially GCP (BigQuery, Pub/Sub), is essential.
- Proficiency in writing efficient SQL queries against large datasets.
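Since the role centres on PySpark pipelines against BigQuery on GCP, here is a minimal illustrative sketch of that read-transform-write pattern. It is not taken from the posting: the project, dataset, table, and bucket names are hypothetical placeholders, and it assumes the spark-bigquery connector is available on the classpath (as it is by default on Dataproc).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-aggregation")  # hypothetical job name
    .getOrCreate()
)

# Read a source table from BigQuery via the spark-bigquery connector.
events = (
    spark.read.format("bigquery")
    .option("table", "my_project.analytics.events")  # placeholder table
    .load()
)

# A simple transformation: daily event counts per user.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Write the result back to BigQuery; the connector stages data
# through a GCS bucket before loading it into the target table.
(
    daily_counts.write.format("bigquery")
    .option("table", "my_project.analytics.daily_event_counts")  # placeholder
    .option("temporaryGcsBucket", "my-staging-bucket")  # placeholder bucket
    .mode("overwrite")
    .save()
)

In practice a pipeline like this would be submitted as a Dataproc job or orchestrated from Cloud Composer, with the per-day aggregation logic factored into a plain function so it can be exercised by the unit and integration tests the role calls for.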