Experience: 8 - 13 years

Salary: 25 - 40 Lakhs

Posted: 9 hours ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Roles and Responsibilities:

  • Work with clients to understand their data.
  • Based on that understanding, build the required data structures and pipelines.
  • Work on the application end to end, collaborating with UI and other development teams.
  • Work with cloud providers such as Azure and AWS.
  • Engineer data using the Hadoop/Spark ecosystem.
  • Design, build, optimize, and support new and existing data pipelines.
  • Orchestrate jobs using tools such as Oozie, Airflow, etc.
  • Develop programs for cleaning and processing data.
  • Build data pipelines to migrate and load data into HDFS, either on-premises or in the cloud.
  • Develop data ingestion, processing, and integration pipelines effectively.
  • Create Hive data structures and metadata, and load data into data lakes / big data warehouse environments.
  • Performance-tune data pipelines to minimize cost.
  • Keep version control and the Git repository up to date.
  • Explain data pipelines to internal and external stakeholders.
  • Build and maintain CI/CD for the data pipelines.
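The cleaning-and-processing responsibility above can be illustrated with a minimal, hypothetical sketch in plain Python (in practice this kind of logic would run inside a Spark job on the cluster; the field names and rules here are assumptions for illustration):

```python
# Hypothetical record-cleaning step: drop rows missing the key field,
# deduplicate by id (keeping the first occurrence), normalize names.
def clean_records(records):
    seen = set()
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # skip rows missing the key field
        if rec["id"] in seen:
            continue  # drop duplicates, keeping the first occurrence
        seen.add(rec["id"])
        cleaned.append({
            "id": rec["id"],
            "name": rec.get("name", "").strip().title(),
        })
    return cleaned

raw = [
    {"id": 1, "name": "  alice  "},
    {"id": None, "name": "ghost"},   # dropped: missing id
    {"id": 1, "name": "ALICE DUP"},  # dropped: duplicate id
    {"id": 2, "name": "bob"},
]
print(clean_records(raw))  # [{'id': 1, 'name': 'Alice'}, {'id': 2, 'name': 'Bob'}]
```

The same drop/dedupe/normalize steps map directly onto DataFrame operations (`dropna`, `dropDuplicates`, column expressions) when written as a Spark job.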

Preferred Qualifications:

  • Bachelor's degree in computer science or a related field.
  • Minimum of 5+ years of working experience with the Spark and Hadoop ecosystems.
  • Minimum of 4+ years of working experience designing data streaming pipelines.
  • Minimum of 3+ years of experience with NoSQL and Spark Streaming.
  • Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, APIs, Hive, Oozie, Airflow, etc.
  • Solid experience in all phases of the SDLC, with 10+ years of experience (plan, design, develop, test, release, maintain, and support).
  • Hands-on experience with Azure's data engineering stack.
  • Should have implemented projects using programming languages such as Scala or Python.
  • Working experience with complex SQL data-merging techniques such as windowing functions.
  • Hands-on experience with on-premises distributions such as Cloudera, Hortonworks, or MapR.
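The "windowing functions" qualification above refers to SQL window functions such as `ROW_NUMBER()`. A minimal hypothetical illustration, using Python's built-in `sqlite3` (SQLite 3.25+ supports window functions) rather than Spark SQL, though the query syntax is the same:

```python
import sqlite3

# In-memory table of orders; the schema and data are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('alice', 10), ('alice', 30), ('bob', 20), ('bob', 5);
""")

# Keep only each customer's largest order: rank rows within each
# customer partition by amount, then take rank 1.
rows = conn.execute("""
    SELECT customer, amount FROM (
        SELECT customer, amount,
               ROW_NUMBER() OVER (PARTITION BY customer
                                  ORDER BY amount DESC) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY customer
""").fetchall()
print(rows)  # [('alice', 30), ('bob', 20)]
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` pattern is a common deduplication and merge technique in Hive and Spark SQL pipelines.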

Cynosure Corporate Solutions

Consulting

Anytown
