Experience: 8 - 12 years

Salary: 10 - 20 Lacs

Platform: Naukri

Work Mode: Hybrid

Job Type: Full Time

Job Description

Key Responsibilities:

  • Development and Implementation: Design, develop, and deploy robust and scalable data pipelines for migrating data from various on-premise sources to cloud platforms (a minimal sketch follows this list).
  • Technical Expertise: Serve as a subject matter expert in the project's core technologies, including Datastage, Azure Data Factory (ADF), Databricks, and PySpark.
  • Solution Design: Collaborate with architects and technical leads to design efficient and effective cloud migration strategies and data transformation solutions.
  • Code Quality: Ensure the delivery of high-quality, well-documented, and maintainable code. Participate in code reviews and provide constructive feedback to peers.
  • Performance Optimization: Analyze and optimize the performance of data pipelines and applications to ensure efficiency and scalability.
  • Problem-Solving: Troubleshoot and resolve complex technical issues related to data migration, data quality, and system performance.
  • Collaboration: Work closely with cross-functional teams, including business analysts, data scientists, and other engineers, to understand requirements and deliver successful outcomes.
  • Mentorship: Provide guidance and technical support to junior engineers, helping them grow their skills and contribute effectively to the team.
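
For illustration only (the posting itself contains no code): a minimal PySpark sketch of the on-premise-to-cloud pipeline pattern the first responsibility describes, extracting a table from a hypothetical on-premise database over JDBC, applying basic cleansing, and landing the result in Azure Data Lake Storage. All hosts, credentials, table names, and paths are illustrative placeholders.

```python
# Minimal sketch of an on-premise-to-cloud migration pipeline in PySpark.
# All hosts, credentials, table names, and storage paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("onprem-to-cloud-migration").getOrCreate()

# Extract: read a table from a hypothetical on-premise SQL Server over JDBC
# (requires the JDBC driver on the Spark classpath).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-db:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<secret>")  # in practice, pull from a secret store
    .load()
)

# Transform: de-duplicate on the business key, normalize types,
# and stamp the load time for auditability.
cleaned = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("load_ts", F.current_timestamp())
)

# Load: write to Azure Data Lake Storage as Parquet, partitioned by date.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/orders")
)
```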

Required Skills and Qualifications:

  • Minimum of 10 years of experience in the IT industry, with a focus on data engineering and ETL development.
  • Extensive hands-on experience with Datastage for building and managing ETL jobs.
  • Proven experience in Cloud Migration projects, with a strong understanding of the challenges and best practices involved in moving data to the cloud.
  • In-depth knowledge and hands-on experience with Azure Data Factory (ADF) for building and orchestrating data workflows.
  • Strong proficiency in Databricks and its ecosystem, including notebooks, jobs, and Delta Lake (see the Delta Lake sketch after this list).
  • Expertise in PySpark for developing data processing applications on a distributed computing framework.
  • Solid understanding of cloud platforms, preferably Microsoft Azure.
  • Experience with SQL and database concepts.
  • Excellent analytical, problem-solving, and communication skills.
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
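
As referenced in the Databricks bullet above, here is a minimal sketch of an incremental-load pattern on Delta Lake, assuming a Databricks runtime (or Spark with the delta-spark package): an idempotent MERGE that upserts staged rows into a target table. The table name, join key, and staging path are hypothetical, not taken from this posting.

```python
# Minimal sketch of a Delta Lake upsert (MERGE) for incremental migration loads.
# Assumes Delta Lake is available (e.g., a Databricks cluster); names are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Staged incremental extract landed by an upstream ADF or Datastage job.
updates = spark.read.parquet(
    "abfss://staging@examplelake.dfs.core.windows.net/sales/orders_increment"
)

target = DeltaTable.forName(spark, "curated.orders")

# Upsert: update rows whose key already exists, insert the rest.
# Re-running the same batch leaves the table unchanged, which makes retries safe.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```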
