Posted: 3 weeks ago | Hybrid | Full Time
Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines using PySpark (see the pipeline sketch after these lists).
- Build and orchestrate data workflows using Apache Airflow (see the DAG sketch after these lists).
- Develop reusable Python modules for data ingestion and transformation.
- Collaborate with data scientists and analysts to understand data needs and build robust solutions.
- Optimize Spark jobs for performance and cost efficiency.
- Monitor and troubleshoot data pipeline failures and latency issues.

Required Skills
- Strong hands-on experience in Python programming.
- In-depth knowledge of PySpark and big data processing.
- Proficiency in developing and scheduling DAGs in Apache Airflow.
- Experience working with SQL, data lakes, and data warehouses.
- Familiarity with Git, Linux, and CI/CD tools.
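For illustration, here is a minimal sketch of the kind of PySpark ETL pipeline the role describes. The dataset, storage paths, and column names (an "orders" feed, s3a:// zones, an "amount" column) are hypothetical, not taken from the posting:

    # Minimal extract-transform-load sketch in PySpark.
    # All paths, table names, and columns below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw JSON from a (hypothetical) data-lake landing zone.
    raw = spark.read.json("s3a://raw-zone/orders/")

    # Transform: fix types, drop invalid rows, aggregate daily revenue.
    daily = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("amount") > 0)
           .groupBy(F.to_date("order_ts").alias("order_date"))
           .agg(F.sum("amount").alias("daily_revenue"))
    )

    # Load: write partitioned Parquet to a curated zone for the warehouse.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3a://curated-zone/daily_revenue/"
    )

    spark.stop()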
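Likewise, a minimal sketch of scheduling such a job as an Airflow DAG. Airflow 2.x syntax is assumed; the DAG id, schedule, and script path are placeholders:

    # Minimal Airflow DAG sketch that submits the (hypothetical) job above.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="orders_etl_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # "schedule" needs Airflow 2.4+; use schedule_interval on older versions
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="spark_submit_orders_etl",
            bash_command="spark-submit /opt/jobs/orders_etl.py",  # hypothetical path
        )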
Advance Career Solutions
Business Consulting and Services | 2-10 Employees | 51 Jobs