Hiring Data Engineer (Big Data) Professionals - Remote

5 - 10 years

12 - 20 Lacs

Posted: 3 weeks ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

Summary:
We are seeking a passionate and driven Data Engineer to play a pivotal role in building and scaling high-quality data products within our analytics ecosystem. This individual will be instrumental in designing, developing, and maintaining robust, scalable, and performant ETL pipelines that power data-driven decision-making across the organization. The ideal candidate combines technical excellence with a collaborative mindset, a curiosity for solving complex problems, and a strong commitment to continuous learning and innovation. You will work closely with cross-functional teams to deliver data solutions that align with both business objectives and technical best practices, contributing to the evolution of our data infrastructure on AWS or Azure.

Experience: 5-10 years

Location: Remote

Responsibilities:
  • Design, develop, and manage scalable ETL/ELT pipelines using Apache Spark (Scala) with a focus on performance, reliability, and fault tolerance (see the sketch after this list)
  • Optimize Spark-based data processing workflows through deep understanding of execution plans, memory management, and performance tuning techniques
  • Build and maintain data pipelines that extract, transform, and load data from diverse sources into enterprise data warehouses, data lakes, or data mesh architectures hosted on AWS or Azure
  • Collaborate with data architects, analysts, and product teams to define data requirements and deliver effective, scalable solutions
  • Implement end-to-end data orchestration using AWS Step Functions or Azure Logic Apps, ensuring reliable workflow execution and monitoring
  • Utilize AWS Glue and Glue Crawlers, or Azure Data Factory and Data Catalog, to automate data discovery, metadata management, and pipeline operations
  • Monitor pipeline health, troubleshoot issues, and enforce data quality standards across all stages of the data lifecycle
  • Maintain high coding standards, produce comprehensive documentation, and actively contribute to both high-level (HLD) and low-level (LLD) design discussions
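
To make the first two responsibilities concrete, here is a minimal batch ETL sketch in Spark (Scala). It is illustrative only: the bucket paths, the orders dataset, and all column names are hypothetical, and a production pipeline would add schema enforcement, data-quality checks, and incremental loads.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: raw CSV landed in object storage (bucket and layout are hypothetical)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://raw-bucket/orders/")

    // Transform: keep completed orders and build a daily aggregate per region
    val daily = raw
      .filter(F.col("order_status") === "COMPLETED")
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .groupBy("order_date", "region")
      .agg(
        F.sum("amount").as("total_amount"),
        F.count(F.lit(1)).as("order_count")
      )

    // Print the physical plan: the usual starting point for performance tuning
    daily.explain()

    // Load: partitioned Parquet in the curated zone; repartition first to
    // control the number of output files per date partition
    daily
      .repartition(F.col("order_date"))
      .write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://curated-bucket/orders_daily/")

    spark.stop()
  }
}
```

The explain() call and the repartition before the write reflect the tuning responsibility above: inspecting the physical plan and controlling partition and file counts are typical first steps when optimizing a Spark job.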


Requirements:
  • 4+ years of progressive experience developing data solutions in large-scale Big Data environments
  • 3+ years of hands-on experience with Python, Apache Spark, and Apache Kafka (a streaming sketch follows this list)
  • Proficient in AWS or Azure cloud platforms, with demonstrated experience in core data services (e.g., S3, Lambda, Glue, Data Factory)
  • Strong expertise in SQL and NoSQL data technologies, including schema design and query optimization
  • Solid understanding of data warehousing principles, dimensional modeling, and ETL/ELT methodologies
  • Familiarity with High-Level Design (HLD) and Low-Level Design (LLD) processes and documentation
  • Excellent written and verbal communication skills, with the ability to collaborate effectively across technical and non-technical teams
  • Passion for building reliable, high-performance data products and a continuous drive to learn and improve
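
As a rough illustration of the Spark-plus-Kafka requirement, the sketch below reads JSON events from a Kafka topic with Spark Structured Streaming and lands them as Parquet. It is written in Scala for consistency with the example above, though the same pipeline translates directly to PySpark. The broker address, topic name, event schema, and storage paths are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical shape of the JSON events on the topic
    val eventSchema = new StructType()
      .add("user_id", StringType)
      .add("page", StringType)
      .add("event_ts", TimestampType)

    // Source: Kafka topic (broker address and topic name are placeholders)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "clickstream-events")
      .option("startingOffsets", "latest")
      .load()
      .select(from_json($"value".cast("string"), eventSchema).as("event"))
      .select("event.*")

    // Sink: append Parquet files to the lake; the checkpoint directory is what
    // lets the query recover from failure without duplicating output
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://lake-bucket/clickstream/")
      .option("checkpointLocation", "s3a://lake-bucket/_checkpoints/clickstream/")
      .start()

    query.awaitTermination()
  }
}
```

The checkpointLocation option is what gives the file sink exactly-once output across restarts, which ties back to the fault-tolerance expectation in the responsibilities above.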

2coms

Recruitment & Consulting

Bangalore
