Posted: 3 days ago | Platform: Naukri


Work Mode: Hybrid

Job Type: Full Time

Job Description

Position: Data Engineer (ETL / Python / PySpark / Azure / AWS)

Location:

Key Responsibilities

  • Develop and maintain scalable, reliable, and high-performance data pipelines.
  • Analyze and organize raw data from multiple sources for ingestion, transformation, and modeling.
  • Evaluate business needs and build optimal data pipeline architecture.
  • Understand the technical environment and platform dependencies of the project.
  • Assemble large, complex datasets that meet both functional and non-functional requirements.
  • Design and develop analytics tools for actionable insights on business performance.
  • Work hands-on with Hadoop, Spark, Hive, HBase, Kafka, and the ELK stack.
  • Work with NoSQL databases such as Cassandra or MongoDB.
  • Implement, monitor, and optimize ETL processes using Python, PySpark, and SQL.
  • Utilize cloud technologies (Azure/AWS) for data storage, orchestration, and deployment.
  • Participate in application, data, and infrastructure architecture planning.
  • Handle diverse data formats: JSON, text files, Kafka queues, and log data.
  • Support production systems and coordinate with teams for incident resolution.
  • Prepare documentation, progress reports, and risk assessments per project standards.
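For illustration only (not part of the original posting), the ETL transformation work described above, ingesting raw JSON records and casting fields to proper types in Python, might be sketched as follows; the record fields and values here are hypothetical:

```python
import json

# Hypothetical raw JSON records, e.g. pulled from a Kafka queue or a log file.
raw_records = [
    '{"user_id": "42", "amount": "19.99", "ts": "2024-01-05"}',
    '{"user_id": "7", "amount": "5.00", "ts": "2024-01-06"}',
]

def transform(record: str) -> dict:
    """Parse one JSON record and cast string fields to their proper types."""
    row = json.loads(record)
    return {
        "user_id": int(row["user_id"]),
        "amount": float(row["amount"]),
        "ts": row["ts"],
    }

# The "transform" step of ETL: clean every ingested record.
cleaned = [transform(r) for r in raw_records]
```

In a production pipeline the same casting logic would typically run distributed via a PySpark DataFrame schema rather than a Python list comprehension.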

Required Skills

  • ETL Development – Python, PySpark, SQL
  • Big Data Technologies – Hadoop, Spark, Hive, HBase, Kafka, ELK stack
  • Databases – SQL, Cassandra, MongoDB
  • Cloud Platforms – Azure (preferred) / AWS
  • Architecture Knowledge – Application, Data, and Infrastructure Architecture
  • Linux OS – Core principles, performance tuning
  • Data Modeling – Strong understanding of database design and data manipulation
  • Agile / DevOps – Exposure to Agile/DevOps working environments

Qualifications

  • Bachelor’s or Master’s Degree in Computer Science, IT, or a related field.
  • 8+ years of experience in Data Engineering, ETL, or Big Data projects.
  • Strong understanding of cloud data services and distributed systems.
  • Excellent problem-solving and debugging skills.
  • Strong communication and stakeholder management ability.

Preferred Candidate Profile

  • Currently serving notice period or available to join within 10–15 days.
  • Open to relocation to Pune or Trivandrum.

How to Apply

pratik.chavan@claidroid.com

Claidroid Technologies

Information Technology

Innovation City
