Experience: 6 - 10 years

Salary: 15 - 25 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Role & responsibilities

  • Design, develop, and optimize data pipelines, ETL/ELT processes, and data integration workflows.
  • Build and maintain data lakes, data warehouses, and real-time streaming pipelines.
  • Work with structured and unstructured data, transforming it into clean, usable datasets for analytics and machine learning.
  • Collaborate with analytics, product, and engineering teams to understand data requirements and implement data models accordingly.
  • Ensure data quality, governance, lineage, and security best practices.
  • Use programming languages and frameworks such as Python, SQL, PySpark, and Databricks.
  • Work with modern cloud data platforms such as AWS (Redshift, Glue, S3).
  • Monitor and optimize data pipeline performance and troubleshoot any issues in data flow.
  • Document data processes and contribute to data engineering standards and frameworks.

Must-have skills

Python, SQL, PySpark, Databricks
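
For context only, below is a minimal PySpark sketch of the kind of ETL work described in the responsibilities above: extract raw data from S3, clean and type it, and load a curated Parquet dataset. The bucket, paths, schema, and column names are hypothetical illustrations and are not part of this posting.

  # Minimal PySpark ETL sketch; all paths and columns are hypothetical.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("orders-etl").getOrCreate()

  # Extract: read raw JSON events from an example S3 landing zone.
  raw = spark.read.json("s3a://example-bucket/landing/orders/")

  # Transform: drop malformed rows, normalize types, deduplicate.
  clean = (
      raw.filter(F.col("order_id").isNotNull())
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
         .dropDuplicates(["order_id"])
  )

  # Load: write a partitioned Parquet table to the curated zone.
  (clean.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-bucket/curated/orders/"))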

Company: Sparshcorp Support Solutions

Industry: IT Services and Consulting
