Posted: 2 weeks ago
Work from Office
Full Time
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Your Key Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes.
- Analyze existing SQL queries, functions, and stored procedures for performance improvements.
- Develop database routines such as procedures, functions, and views.
- Participate in data migration projects and understand technologies like Delta Lake/warehouse.
- Debug and solve complex problems in data pipelines and processes.

Your skills and experience that will help you excel
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong understanding of distributed data processing platforms like Databricks and BigQuery.
- Proficiency in Python, PySpark, and SQL.
- Experience with performance optimization for large datasets.
- Strong debugging and problem-solving skills.
- Fundamental knowledge of cloud services, preferably Azure or GCP.
- Excellent communication and teamwork skills.

Nice to Have
- Experience in data migration projects.
- Understanding of technologies like Delta Lake/warehouse.
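The partitioning technique named above can be sketched in plain Python. This is a minimal illustration of the idea behind PySpark's `df.repartition("key")` — rows sharing a key hash into the same partition, so a later group-by or join on that key avoids shuffling data across partitions. The `hash_partition` helper and the sample data are illustrative, not part of any Spark API:

```python
def hash_partition(rows, key, num_partitions):
    """Assign each row (a dict) to a partition by hashing its key,
    so all rows with the same key value land in one partition."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        idx = hash(row[key]) % num_partitions
        partitions[idx].append(row)
    return partitions

rows = [{"city": c, "sales": s}
        for c, s in [("Pune", 10), ("Mumbai", 20), ("Pune", 5), ("Chennai", 7)]]
parts = hash_partition(rows, "city", 4)

# Every "Pune" row ends up in exactly one partition (co-located).
pune_parts = [i for i, p in enumerate(parts)
              if any(r["city"] == "Pune" for r in p)]
assert len(pune_parts) == 1
# No rows are lost or duplicated by partitioning.
assert sum(len(p) for p in parts) == len(rows)
```

In real PySpark the same co-location effect is what makes a well-chosen partition key pay off for large aggregations and joins.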
MSCI Services
Locations: Kolkata, Mumbai, New Delhi, Hyderabad, Pune, Chennai, Bengaluru
Salary: 10.0 - 11.0 Lacs P.A.