Posted: 22 hours ago
Work mode: On-site
Employment type: Contractual
Our client is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives our clients' continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.

Job Title: Data Engineer
Location: PAN India
Experience: 5+ years
Job Type: Contract to hire
Notice Period: Immediate joiners only
Mandatory Skills: GCP (BigQuery, Dataproc), Airflow, PySpark, Python, SQL

Job Description:
We are looking for highly skilled Data Engineers with 5 to 9 years of experience in data engineering, specializing in PySpark, Python, SQL, Airflow, and GCP (IAM, Cloud Storage, Dataproc, BigQuery), and in building data pipelines that handle terabyte-scale data processing. The ideal candidate will have a strong background in designing, developing, and maintaining scalable data pipelines and architectures.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark, Python, GCP, and Airflow.
- Implement data processing workflows and ETL processes to extract, transform, and load data from various sources into data lakes and data warehouses.
- Manage and optimize data storage solutions using GCP services.
- Terabyte-scale data processing: develop and optimize PySpark code to handle terabytes of data efficiently, applying performance tuning techniques to reduce processing time and improve resource utilization.
- Data lake implementation: build a scalable data lake on GCP Cloud Storage to store and manage structured and unstructured data.
- Data quality framework: develop a data quality framework using PySpark and GCP to perform automated data validation and anomaly detection, improving data accuracy and reliability for downstream analytics.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Perform data quality checks and validation to ensure data accuracy and consistency.
- Monitor and troubleshoot data pipelines to ensure smooth and efficient data processing.
- Stay updated with the latest industry trends and technologies in data engineering.
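To give candidates a sense of the "automated data validation" responsibility above, here is a minimal sketch of a batch-level null-rate check. This is an illustration only: a production framework of the kind described would express such rules in PySpark over Cloud Storage or BigQuery data, and every name here (`null_rate`, `validate`, the `required` columns) is hypothetical, not part of any stated stack or codebase.

```python
# Hypothetical, simplified data-quality rule: flag columns whose null
# rate in a batch exceeds a threshold. Plain Python is used so the
# example is self-contained; a real pipeline would run the equivalent
# aggregation in PySpark.

def null_rate(rows, column):
    """Fraction of rows in which `column` is missing (None)."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def validate(rows, max_null_rate=0.05, required=("id", "amount")):
    """Return human-readable violations; an empty list means the batch passes."""
    violations = []
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            violations.append(
                f"{col}: null rate {rate:.0%} exceeds {max_null_rate:.0%}"
            )
    return violations

# Example batch: one of four rows is missing `amount` (25% null rate),
# so the `amount` rule fires at the 5% threshold.
batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 7.5},
    {"id": 4, "amount": 3.2},
]
print(validate(batch))
```

In practice such checks run as a pipeline step (for example, an Airflow task) that fails or alerts when the returned violation list is non-empty, which is what "anomaly detection for downstream analytics" typically amounts to operationally.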
People Prime Worldwide