Posted: 2 weeks ago
Work from Office
Full Time
We are seeking a highly skilled Data Quality Manager with hands-on experience in SQL, PySpark, Databricks, Snowflake, and CI/CD processes. The ideal candidate will design, develop, and maintain scalable data pipelines and infrastructure to support our data analytics and business intelligence needs. You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and delivery of high-quality data.

About the Role

Key Responsibilities:
- Design, develop, and optimize data pipelines using PySpark to process and analyze large datasets.
- Write complex SQL queries for data extraction, transformation, and loading (ETL).
- Work with Databricks to build and maintain collaborative, scalable data solutions.
- Implement and manage CI/CD processes for data pipeline deployments to ensure seamless, efficient integration and deployment.
- Collaborate with data scientists and business analysts to understand data requirements and deliver appropriate solutions.
- Ensure data quality, integrity, and security across all data processes.
- Monitor and troubleshoot data pipelines and workflows to resolve issues promptly.
- Continuously improve data and code quality through automation and best practices.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Proven experience with PySpark, including developing and tuning data processing applications.
- Advanced proficiency in SQL, including writing complex queries and optimizing them for performance.
- Hands-on experience with Databricks, including notebooks, clusters, and integration with other data tools.
- Strong understanding of CI/CD pipelines and experience with tools such as Jenkins, GitLab CI/CD, or Azure DevOps.
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and related data services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Preferred Skills:
- Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift).
- Knowledge of the Kedro framework is a plus.
Hyderabad, Telangana, India
Experience: Not specified
Salary: Not disclosed