Experience: 7-12 years

Salary: 14-27 Lacs

Posted: 6 days ago | Platform: Foundit


Work Mode: On-site

Job Type: Full Time

Job Description

Job Overview:

We are looking for an experienced Data Engineer with strong hands-on expertise in Snowflake, dbt, and Apache Airflow. If you are passionate about data architecture, ELT best practices, and the modern cloud data stack, we'd like to meet you.

Key Responsibilities:

Pipeline Design & Orchestration: Build and maintain robust, scalable data pipelines using Apache Airflow, including incremental & full-load strategies, retries, and logging (see the Airflow sketch after this list).

Data Modelling & Transformation: Develop modular, tested, and documented transformations in dbt, ensuring scalability and maintainability.

Snowflake Development: Design and maintain warehouses in Snowflake, optimize Snowflake schemas, implement performance tuning (clustering keys, warehouse scaling, materialized views), manage access control, and use streams & tasks for automation (see the streams & tasks sketch after this list).

Data Quality & Monitoring: Implement validation frameworks (null checks, type checks, threshold alerts) and automated testing for data integrity and reliability (a validation sketch follows the list).

Collaboration: Partner with analysts, data scientists, and business stakeholders to translate requirements into scalable technical solutions.

Performance Optimization: Develop incremental and full-load strategies with continuous monitoring, retries, and logging; tune query performance and job execution efficiency.

Infrastructure Automation: Use Terraform or similar IaC tools to provision and manage Snowflake, Airflow, and related environments.

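For illustration, here is a minimal sketch of the orchestration pattern described under Pipeline Design & Orchestration, assuming Airflow 2.4+ with the TaskFlow API; the DAG, task, and table names are hypothetical:

```python
# Minimal sketch: an hourly incremental-load DAG with retries and logging.
# Assumes Airflow 2.4+; incremental_orders_pipeline is a hypothetical name.
import logging
from datetime import datetime, timedelta

from airflow.decorators import dag, task

logger = logging.getLogger(__name__)

default_args = {
    "retries": 3,                          # retry failed tasks automatically
    "retry_delay": timedelta(minutes=5),   # wait between attempts
    "retry_exponential_backoff": True,     # back off progressively
}

@dag(
    schedule="@hourly",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args=default_args,
)
def incremental_orders_pipeline():
    @task
    def extract(data_interval_start=None, data_interval_end=None):
        # Incremental strategy: pull only rows changed inside the current
        # data interval instead of reloading the full table.
        logger.info("Extracting rows between %s and %s",
                    data_interval_start, data_interval_end)
        return {"start": str(data_interval_start), "end": str(data_interval_end)}

    @task
    def load(window: dict):
        logger.info("Loading window %s into the warehouse", window)

    load(extract())

incremental_orders_pipeline()
```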
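Similarly, a minimal sketch of the validation checks named under Data Quality & Monitoring; the column names and null-ratio threshold are hypothetical, and in practice such checks would typically run as Airflow tasks or dbt tests:

```python
# Minimal sketch: batch validation with null checks and a threshold alert.
from typing import Sequence


def validate_batch(rows: Sequence[dict], required: Sequence[str],
                   max_null_ratio: float = 0.01) -> list[str]:
    """Return human-readable violations found in a batch of records."""
    failures: list[str] = []
    if not rows:
        return ["empty batch: expected at least one row"]
    for col in required:
        nulls = sum(1 for r in rows if r.get(col) is None)   # null check
        ratio = nulls / len(rows)
        if ratio > max_null_ratio:                           # threshold alert
            failures.append(
                f"{col}: null ratio {ratio:.2%} exceeds {max_null_ratio:.2%}"
            )
    return failures


# Example: flag a batch where 'amount' is missing in half the rows.
batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
print(validate_batch(batch, required=["id", "amount"]))
# -> ['amount: null ratio 50.00% exceeds 1.00%']
```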
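Finally, a sketch of the streams & tasks automation pattern called out under Snowflake Development, using the snowflake-connector-python driver; the object names, warehouse, and schedule are all hypothetical:

```python
# Minimal sketch: change capture with a Snowflake stream plus a scheduled task.
# Credentials come from the environment; raw_orders, orders_clean, and
# transform_wh are hypothetical object names.
import os

import snowflake.connector

statements = [
    # A stream records row-level changes on the raw table.
    "CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders",
    # A task merges those changes downstream; reading the stream advances it.
    """
    CREATE TASK IF NOT EXISTS merge_orders_task
      WAREHOUSE = transform_wh
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO orders_clean (order_id, amount)
      SELECT order_id, amount
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended and must be resumed explicitly.
    "ALTER TASK merge_orders_task RESUME",
]

with snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
) as conn:
    for stmt in statements:
        conn.cursor().execute(stmt)
```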

Qualifications:

Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.

7-10 years of experience in data engineering, with strong hands-on expertise in:

- Snowflake (data modelling, performance tuning, access control, streams & tasks, external tables)

- Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)

- dbt (modular SQL development, Jinja templating, testing, documentation)

Proficiency in SQL and Python (Spark experience is a plus).

Experience building and managing pipelines on AWS, GCP, or Azure.

Strong understanding of data warehousing concepts and ELT best practices.

Familiarity with version control (Git) and CI/CD workflows.

Exposure to infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments.

Excellent problem-solving, collaboration, and communication skills, with the ability to lead technical projects.

Good to have:

- Experience with streaming data pipelines (Kafka, Kinesis, Pub/Sub).

- Exposure to BI/analytics tools (Looker, Tableau, Power BI).

- Knowledge of data governance and security best practices.
