Posted: 6 hours ago
Work from Office | Full Time
Mandatory skills: AWS, Python, PySpark, SQL, Databricks

Role & responsibilities:
- Design, develop, and maintain robust, scalable data pipelines using AWS services and Databricks.
- Implement data processing solutions with PySpark and SQL to handle large volumes of data efficiently (an illustrative sketch follows below).
- Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
- Ensure data quality and integrity through rigorous testing and validation.
- Optimize data workflows for performance and cost-efficiency.
- Document data processes and provide support for data-related issues.

Preferred candidate profile:
- AWS services: proficiency in services such as S3, EC2, Lambda, and Redshift.
- Programming: strong experience with Python for data manipulation and scripting.
- Big data processing: hands-on experience with PySpark for distributed data processing.
- SQL: expertise in writing complex SQL queries for data extraction and transformation.
- Databricks: experience developing and managing workflows in a Databricks environment.
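To make the responsibilities concrete, here is a minimal, hypothetical sketch of the kind of PySpark pipeline this role describes: reading raw data from S3, applying a data-quality filter and an aggregation, and writing a partitioned result back to S3. All bucket paths, column names, and the app name are illustrative assumptions, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue-pipeline").getOrCreate()

# Read raw order events from S3 (hypothetical bucket and schema).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Data-quality step: drop null or non-positive amounts before aggregating.
daily_revenue = (
    orders
    .filter(F.col("order_amount").isNotNull() & (F.col("order_amount") > 0))
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("order_amount").alias("total_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Write the curated aggregate back to S3, partitioned by date so downstream
# queries (e.g. from Redshift Spectrum or Databricks SQL) can prune partitions.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")
)

The same code runs unchanged in a Databricks notebook cell (where the `spark` session is already provided), which is one common way teams combine the AWS and Databricks pieces listed above.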
Sparshcorp Support Solutions