Consultant | Data Engineer

2 - 4 years

8.0 - 12.0 Lacs P.A.

Pune

Posted: 2 months ago | Platform: Naukri


Skills Required

T-SQL, health insurance, Spark, data processing, scheduling, data quality, stored procedures, big data, data warehousing, monitoring

Work Mode

Work from Office

Job Type

Full Time

Job Description

In one sentence

We are looking for a Data Engineer (Consultant) in Pune with expertise in Databricks SQL, PySpark, Spark SQL, Airflow, and Azure Databricks, responsible for migrating SQL Server stored procedures, building scalable incremental data pipelines, and orchestrating workflows while ensuring data quality, performance optimization, and best practices in a cloud-based environment.

Key Responsibilities

- Migrate SQL Server stored procedures to Databricks Notebooks, leveraging PySpark and Spark SQL for complex transformations.
- Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
- Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
- Implement incremental data transformation workflows to update Silver and Gold layer datasets in near real time, adhering to Delta Lake best practices.
- Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
- Understand business and technical requirements, translating them into scalable Databricks solutions.
- Optimize Spark jobs and queries for performance, scalability, and cost efficiency in a distributed environment.
- Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
- Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.

All you need is...

- Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 4+ years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL. (Must)
- Proven experience with incremental data loading techniques in Databricks, leveraging Delta Lake features (e.g., time travel, MERGE INTO).
- Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
- Proficiency in T-SQL and experience migrating SQL Server stored procedures to Databricks.

Why you will love this job

- Design and optimize large-scale data solutions using cutting-edge technologies.
- Work in a collaborative, fast-paced environment on innovative projects.
- Gain hands-on experience with Azure Databricks, Airflow, and big data processing.
- Enjoy career growth, learning opportunities, and a supportive work culture.
- Benefit from comprehensive perks, including health insurance and paid time off.
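For candidates unfamiliar with the incremental-loading requirement mentioned above, Delta Lake's MERGE INTO statement is the typical mechanism for upserting change data from a Bronze staging source into a Silver table. A minimal sketch in Databricks SQL follows; the table and column names (silver.customers, bronze.customers_updates, customer_id, updated_at) are hypothetical placeholders, not taken from this posting:

```sql
-- Hypothetical incremental upsert: apply new and changed rows from a
-- Bronze staging table into a Silver Delta table, keyed on customer_id.
MERGE INTO silver.customers AS tgt
USING bronze.customers_updates AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED AND src.updated_at > tgt.updated_at THEN
  UPDATE SET *          -- overwrite the existing row with the newer version
WHEN NOT MATCHED THEN
  INSERT *              -- insert rows seen for the first time
```

Because the target is a Delta table, each MERGE produces a new table version, so Delta's time travel can be used to query or roll back to the state before a given load.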
