Experience: 3 - 5 years

Salary: 9 - 14 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode: Remote

Job Type: Full Time

Job Description

Key Responsibilities:

• Design, develop, and maintain ETL pipelines using Azure Databricks notebooks and workflows (a minimal sketch follows this list).

• Perform big data processing and analytics using Spark on the Databricks platform.

• Write optimized and efficient code using PySpark, SparkSQL, and Python.

• Implement and enhance data transformation and integration pipelines.

• Manage secrets and credentials securely using Azure Key Vault and Databricks secret scopes.

• Write and debug complex SQL queries (preferably PL/SQL and Spark SQL) for data retrieval and analysis.

• Troubleshoot and resolve issues related to Python, PySpark, and SQL.

• Collaborate with cross-functional teams to understand data requirements and deliver solutions.

• Use Git for version control and manage CI/CD pipelines for automated deployments.
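For illustration only (not part of the posting), the sketch below shows what one such ETL step might look like in PySpark. It assumes it runs inside a Databricks notebook, where the `spark` session and `dbutils` helper are predefined; the secret scope "etl-scope", the key "sql-password", the JDBC server, and all table names are hypothetical placeholders.

```python
from pyspark.sql import functions as F

# Pull the database password from a Databricks secret scope backed by
# Azure Key Vault, so no credential is hard-coded in the notebook.
# Scope and key names here are hypothetical.
jdbc_password = dbutils.secrets.get(scope="etl-scope", key="sql-password")

# Extract: read the source table over JDBC (placeholder server and database).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-server.database.windows.net:1433;database=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", jdbc_password)
    .load()
)

# Transform: keep completed orders and aggregate revenue per day
# using PySpark column functions.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETE")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: persist the result as a Delta table for downstream analytics.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```

The same transform can also be expressed in Spark SQL once the DataFrame is registered as a temporary view, which is often the preferred style as queries grow more complex:

```python
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT to_date(order_ts) AS order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETE'
    GROUP BY to_date(order_ts)
""")
```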

Required Skills:

• Strong experience with Azure cloud services and cloud-native data engineering.

• Proficiency in Databricks, Spark, PySpark, and SparkSQL.

• Solid understanding of SQL variants, especially PL/SQL.

• Experience with Git and CI/CD tools and practices.

• Excellent problem-solving, communication, and collaboration skills.
