Experience

7 years

Salary

2 - 8 Lacs

Posted: 21 hours ago | Platform: GlassDoor

Work Mode

On-site

Job Type

Part Time

Job Description

Position Summary:
We are seeking a skilled ETL Developer to design, build, and maintain robust data pipelines for structured and semi-structured data across real-time, near-real-time, and batch workloads. The ideal candidate will be hands-on with modern ETL tools and data platforms, with strong experience in Snowflake, data integration, and data quality management.

Required Skillset & Experience:
  • 7+ years of experience as an ETL Developer or Data Engineer.
  • Proven expertise in ETL/ELT pipeline design for structured and semi-structured data.
  • Hands-on experience with Qlik Replicate or equivalent real-time replication tools.
  • Strong proficiency with Snowflake (data ingestion, SQL, performance tuning); a brief ingestion sketch follows this list.
  • Solid understanding of data integration from APIs and external data sources.
  • Experience in data modeling, normalization, and relational database design.
  • Knowledge of data governance, quality frameworks, and best practices.
  • Proficient in Python and Java scripting for ETL logic and automation.
  • Familiarity with CI/CD tools such as Git, Jenkins, or Azure DevOps.
  • Experience working with distributed databases like YugabyteDB.
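
For illustration only, here is a minimal sketch of batch ingestion into Snowflake using the snowflake-connector-python package. All connection values, the table name, and the file path are hypothetical placeholders, not details from this posting; the target table is assumed to already exist.

    # Minimal sketch: bulk-load a local CSV into Snowflake via a table stage.
    # Every identifier below (account, user, warehouse, database, table) is a
    # hypothetical placeholder.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    cur = conn.cursor()
    try:
        # PUT uploads the file to the table's internal stage; COPY INTO then
        # bulk-loads it, which is the standard Snowflake batch pattern.
        cur.execute("PUT file:///data/orders.csv @%ORDERS_RAW")
        cur.execute(
            "COPY INTO ORDERS_RAW "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) "
            "ON_ERROR = 'ABORT_STATEMENT'"
        )
    finally:
        cur.close()
        conn.close()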

Key Responsibilities:
  • Design, develop, and maintain efficient ETL/ELT pipelines to ingest, transform, and load data from a variety of sources (a minimal end-to-end sketch follows this list).
  • Implement real-time and near-real-time replication using tools like Qlik Replicate or similar technologies.
  • Work extensively with Snowflake, including data loading, transformation (SQL and scripting), and performance tuning.
  • Integrate and manage multiple data sources, including flat files, databases, and third-party APIs.
  • Develop and optimize data models, ensure proper data normalization, and maintain high-quality data structures.
  • Ensure data governance, data lineage, and security compliance throughout the data lifecycle.
  • Implement and support CI/CD pipelines for automated data pipeline deployments and testing.
  • Write efficient Python and Java scripts for data manipulation, transformation, and automation tasks.
  • Collaborate with DevOps and cloud teams to build scalable, fault-tolerant data workflows.
  • Utilize and manage distributed SQL databases such as YugabyteDB.
  • Perform data validation, error handling, and implement logging and monitoring for pipeline health and SLA compliance.
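
As a purely illustrative example of the kind of pipeline script described above, the sketch below extracts records from a hypothetical REST API, applies a light transform with validation and logging, and loads the result into a relational table. The endpoint URL is invented for the example, and sqlite3 stands in for the real warehouse target.

    # Minimal batch ETL sketch: extract -> transform -> load, with basic
    # record validation and logging for pipeline observability.
    import logging
    import sqlite3

    import requests

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("etl.orders")

    API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

    def extract() -> list[dict]:
        # Pull the raw records; a timeout keeps a hung API from stalling the job.
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def transform(rows: list[dict]) -> list[tuple]:
        out = []
        for row in rows:
            # Skip malformed records, but log them so data quality
            # issues remain visible rather than silently dropped.
            if "id" not in row or "amount" not in row:
                log.warning("skipping malformed record: %r", row)
                continue
            out.append((row["id"], float(row["amount"]), row.get("status", "UNKNOWN")))
        return out

    def load(rows: list[tuple]) -> None:
        # sqlite3 stands in for the real warehouse target in this sketch.
        with sqlite3.connect("orders.db") as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS orders "
                "(id TEXT PRIMARY KEY, amount REAL, status TEXT)"
            )
            conn.executemany(
                "INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows
            )
        log.info("loaded %d rows", len(rows))

    if __name__ == "__main__":
        load(transform(extract()))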

Qualifications

BE (Bachelor of Engineering)

Experience Range

5 - 7 years
