Job Title: ETL Developer

Position Summary:
We are seeking a skilled ETL Developer to design, build, and maintain robust data pipelines for structured and semi-structured data across real-time, near-real-time, and batch workloads. The ideal candidate will be hands-on with modern ETL tools and data platforms, with strong experience in Snowflake, data integration, and data quality management.
Required Skills & Experience:
- 7+ years of experience as an ETL Developer or Data Engineer.
- Proven expertise in ETL/ELT pipeline design for structured and semi-structured data.
- Hands-on experience with Qlik Replicate or equivalent real-time replication tools.
- Strong proficiency with Snowflake (data ingestion, SQL, performance tuning).
- Solid understanding of data integration from APIs and external data sources.
- Experience in data modeling, normalization, and relational database design.
- Knowledge of data governance, quality frameworks, and best practices.
- Proficiency in Python and Java for ETL logic, scripting, and automation.
- Familiarity with version control and CI/CD tools such as Git, Jenkins, or Azure DevOps.
- Experience working with distributed databases such as YugabyteDB.
Key Responsibilities:
- Design, develop, and maintain efficient ETL/ELT pipelines to ingest, transform, and load data from a variety of sources.
- Implement real-time and near-real-time replication using Qlik Replicate or similar technologies.
- Work extensively with Snowflake, including data loading, transformation (SQL and scripting), and performance tuning.
- Integrate and manage multiple data sources, including flat files, databases, and third-party APIs.
- Develop and optimize data models, ensure proper data normalization, and maintain high-quality data structures.
- Ensure data governance, data lineage, and security compliance throughout the data lifecycle.
- Implement and support CI/CD pipelines for automated data pipeline deployments and testing.
- Write efficient Python and Java scripts for data manipulation, transformation, and automation tasks.
- Collaborate with DevOps and cloud teams to build scalable, fault-tolerant data workflows.
- Utilize and manage distributed SQL databases such as YugabyteDB.
- Perform data validation and error handling, and implement logging and monitoring for pipeline health and SLA compliance.