Experience

4 - 9 years

Salary

10 - 20 Lacs

Location

Chandigarh

Posted: 2 days ago | Platform: Naukri

Skills Required

PySpark, Python, Snowflake, Databricks, SQL

Work Mode

Hybrid

Job Type

Full Time

Job Description

- Design, build, and maintain scalable and reliable data pipelines on Databricks, Snowflake, or equivalent cloud platforms.
- Ingest and process structured, semi-structured, and unstructured data from a variety of sources, including APIs, RDBMS, and file systems.
- Perform data wrangling, cleansing, transformation, and enrichment using PySpark, Pandas, NumPy, or similar libraries (see the PySpark sketch below).
- Optimize and manage large-scale data workflows for performance, scalability, and cost-efficiency.
- Write and optimize complex SQL queries for transformation, extraction, and reporting.
- Design and implement efficient data models and database schemas with appropriate partitioning and indexing strategies for a Data Warehouse or Data Mart.
- Leverage cloud services (e.g., AWS S3, Glue, Kinesis, Lambda) for storage, processing, and orchestration.
- Use orchestration tools such as Airflow, Temporal, or AWS Step Functions to manage end-to-end workflows (see the Airflow sketch below).
- Build containerized solutions using Docker and manage deployment pipelines via CI/CD tools such as Azure DevOps, GitHub Actions, or Jenkins.
- Collaborate closely with data scientists, analysts, and business stakeholders to understand requirements and deliver data solutions.
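
The kind of ingest-cleanse-load pipeline described above can be illustrated with a minimal PySpark sketch. The bucket path, table name, and column names below are hypothetical and used purely to show the shape of the work, not an actual pipeline from this role.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical app name, S3 path, and columns for illustration only.
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest semi-structured JSON landed in object storage (e.g. AWS S3).
raw = spark.read.json("s3://example-bucket/landing/orders/")

# Cleanse and enrich: deduplicate, normalise types, derive a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Persist as a partitioned Delta table for downstream warehouse/mart use.
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.orders_cleaned"))
```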
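
For the orchestration responsibility, a bare-bones Airflow DAG of the sort implied by the posting might look like the sketch below. The DAG id, schedule, and task callables are assumptions for illustration; in practice the tasks would trigger the Databricks job or Snowflake load rather than print messages.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real extract/transform steps.
def extract():
    print("pull increment from source API / RDBMS")

def transform_and_load():
    print("run PySpark transformation and load warehouse tables")

with DAG(
    dag_id="daily_orders_pipeline",      # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform_and_load",
                        python_callable=transform_and_load)
    t1 >> t2
```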
