NTT DATA, Inc.
Experience: 2 - 6 years
Salary: 6 - 10 Lacs P.A.
Posted: 1 day ago
Work from Office
Full Time
You bring systems-design experience, with the ability to architect and explain complex system interactions, data flows, common interfaces, and APIs.
You bring a deep understanding of, and experience with, software development in languages such as Java/Kotlin, along with shell scripting.
You have hands-on experience with the following technologies as a senior software developer: Java/Kotlin, Spring, Spring Boot, WireMock, Docker, Terraform, GCP services (Kubernetes, CloudSQL, Pub/Sub, Storage, Logging, Dashboards), Oracle & Postgres, SQL, PgWeb, Git, GitHub & GitHub Actions, plus a GCP Professional Data Engineering certification.
Data Pipeline Development:
Designing, implementing, and optimizing data pipelines on GCP using PySpark for efficient and scalable data processing (a minimal PySpark sketch follows this list).
ETL Workflow Development:
Building and maintaining ETL workflows for extracting, transforming, and loading data into various GCP services.
GCP Service Utilization:
Leveraging GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc for data storage, processing, and analysis.
Data Transformation:
Utilizing PySpark for data manipulation, cleansing, enrichment, and validation.
Performance Optimization:
Ensuring the performance and scalability of data processing jobs on GCP.
Collaboration:
Working with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
Data Quality and Governance:
Implementing and maintaining data quality standards, security measures, and compliance with data governance policies on GCP.
Troubleshooting and Support:
Diagnosing and resolving issues related to data pipelines and infrastructure.
Staying Updated:
Keeping abreast of the latest GCP services, PySpark features, and best practices in data engineering.
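By way of illustration, here is a minimal sketch of the pipeline-development, transformation, and BigQuery duties described above, written against PySpark and the spark-bigquery connector available on Dataproc. Every bucket, project, table, and column name below is a hypothetical placeholder, not something taken from the role:

    # Hypothetical end-to-end sketch: extract CSV from Cloud Storage,
    # cleanse/enrich with PySpark, load into BigQuery.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: raw CSV files landed in a GCS bucket (path is illustrative;
    # reading gs:// paths assumes the GCS connector, present on Dataproc).
    raw = (
        spark.read
        .option("header", True)
        .csv("gs://example-landing-bucket/orders/*.csv")
    )

    # Transform: drop incomplete rows, validate amounts, enrich with a load date.
    cleaned = (
        raw.dropna(subset=["order_id", "amount"])
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount") > 0)
        .withColumn("load_date", F.current_date())
    )

    # Load: write to BigQuery via the spark-bigquery connector
    # (table and staging bucket names are made up for the example).
    (
        cleaned.write.format("bigquery")
        .option("table", "example-project.analytics.orders")
        .option("temporaryGcsBucket", "example-temp-bucket")
        .mode("append")
        .save()
    )

The same extract-transform-load shape carries over to Dataproc or Dataflow jobs; only the source and sink options change.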
Required Skills:
GCP Expertise: Strong understanding of GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc.
PySpark Proficiency: Demonstrated experience in using PySpark for data processing, transformation, and analysis.
Python Programming: Solid Python programming skills for data manipulation and scripting.
Data Modeling and ETL: Experience with data modeling, ETL processes, and data warehousing concepts.
SQL: Proficiency in SQL for querying and manipulating data in relational databases (see the Spark SQL sketch after this list).
Big Data Concepts: Understanding of big data principles and distributed computing concepts.
Communication and Collaboration: Ability to effectively communicate technical solutions and collaborate with cross-functional teams.
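As a hypothetical illustration of the SQL and PySpark skills side by side, the sketch below registers a small in-memory DataFrame as a temporary view and aggregates it with Spark SQL; the table, columns, and rows are invented for the example:

    # Hypothetical sketch: express a transformation in SQL on top of PySpark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-skill-demo").getOrCreate()

    # A tiny made-up orders dataset.
    orders = spark.createDataFrame(
        [("2024-01-01", "IN", 120.0),
         ("2024-01-01", "US", 80.0),
         ("2024-01-02", "IN", 45.5)],
        ["order_date", "country", "amount"],
    )
    orders.createOrReplaceTempView("orders")

    # Aggregate revenue per day and country with plain SQL.
    daily_revenue = spark.sql("""
        SELECT order_date, country, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date, country
        ORDER BY order_date, country
    """)
    daily_revenue.show()

A standard GROUP BY query like this one would run essentially unchanged against a BigQuery table, which is where the GCP expertise listed above comes in.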