Data Engineer - Snowflake

Experience: 5 years


Posted: 5 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Skills: Snowflake + DBT + Iceberg + Apache Flink

Exp: 5 to 10 years

Notice Period: Immediate to 15 days

Location: Hyderabad


About the Role

As a Data Engineer, you will work closely with data analysts, data scientists, and platform engineers to ensure efficient, scalable, and reliable data workflows across the organization.


Key Responsibilities

  • Design, develop, and maintain data pipelines and data models using Snowflake and DBT.
  • Implement data transformations, data validation, and metadata management using modern ELT frameworks.
  • Build and manage real-time data processing systems using Apache Flink (or similar stream-processing technologies); a minimal sketch follows this list.
  • Integrate batch and streaming data into data lakehouse environments (Iceberg, Delta Lake, or similar).
  • Optimize Snowflake performance through query tuning, clustering, and data partitioning strategies.
  • Collaborate with cross-functional teams to understand business data needs and translate them into scalable solutions.
  • Ensure data quality, security, and governance best practices are followed throughout the pipeline lifecycle.
  • Support CI/CD automation for DBT and data infrastructure deployments.
  • Monitor, troubleshoot, and enhance data systems for performance and reliability.
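
For context, here is a minimal sketch of the kind of real-time processing the Apache Flink bullet describes, assuming PyFlink (the apache-flink package) is installed; the clicks schema is hypothetical, and Flink's built-in datagen and print connectors stand in for real sources and sinks:

    from pyflink.table import EnvironmentSettings, TableEnvironment

    # Streaming-mode Table API environment.
    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Hypothetical source: the built-in 'datagen' connector emits synthetic
    # click events, standing in for a Kafka topic or CDC feed.
    t_env.execute_sql("""
        CREATE TABLE clicks (
            user_id INT
        ) WITH (
            'connector' = 'datagen',
            'rows-per-second' = '5',
            'number-of-rows' = '50',
            'fields.user_id.min' = '1',
            'fields.user_id.max' = '10'
        )
    """)

    # Sink: the 'print' connector, standing in for an Iceberg table or a
    # Snowflake landing zone.
    t_env.execute_sql("""
        CREATE TABLE click_counts (
            user_id INT,
            cnt BIGINT
        ) WITH ('connector' = 'print')
    """)

    # Continuous aggregation: running click counts per user as events arrive.
    t_env.execute_sql("""
        INSERT INTO click_counts
        SELECT user_id, COUNT(*) AS cnt
        FROM clicks
        GROUP BY user_id
    """).wait()

In production the print sink would typically be swapped for an Iceberg catalog table, so streaming output lands in the same lakehouse the batch pipelines read.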


Required Qualifications

  • 3–6 years of experience in Data Engineering or Analytics Engineering roles.
  • Strong expertise in Snowflake (data modeling, performance tuning, security, cost optimization); a tuning sketch follows this list.
  • Hands-on experience with DBT (Data Build Tool): building modular, testable, and documented data models.
  • Proficiency in SQL and one programming language (e.g., Python, Java, or Scala).
  • Working knowledge of Apache Flink or other streaming frameworks (Kafka Streams, Spark Structured Streaming).
  • Experience with Apache Iceberg, Delta Lake, or similar lakehouse formats.
  • Familiarity with cloud platforms (AWS, GCP, or Azure) and modern data orchestration tools (Airflow, Dagster, Prefect).
  • Strong understanding of data warehousing concepts, ETL/ELT, and data governance.
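
To make the Snowflake tuning item concrete, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, warehouse, and the orders table are all placeholders:

    import snowflake.connector

    # Placeholder connection details; in practice these come from a secrets
    # manager, and key-pair auth is preferred over passwords.
    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        warehouse="ANALYTICS_WH",   # hypothetical warehouse
        database="ANALYTICS",       # hypothetical database
        schema="MARTS",             # hypothetical schema
    )
    cur = conn.cursor()

    # Inspect the plan first: EXPLAIN shows how much the date filter will
    # scan before any tuning is applied.
    cur.execute("EXPLAIN SELECT * FROM orders WHERE order_date >= '2024-01-01'")
    for row in cur.fetchall():
        print(row)

    # Define a clustering key so Snowflake co-locates rows by date, letting
    # the partition pruner skip micro-partitions outside the filter range.
    cur.execute("ALTER TABLE orders CLUSTER BY (order_date)")

    cur.close()
    conn.close()

Clustering keys trade background reclustering credits for faster pruning, so they pay off mainly on large tables with selective, stable filter columns.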


Preferred Qualifications

  • Experience integrating Snowflake with streaming and lakehouse architectures.
  • Knowledge of modern data stack tools (Fivetran, Airbyte, Great Expectations, etc.).
  • Exposure to DevOps or DataOps principles (CI/CD, Git, Infrastructure as Code); a CI-style dbt sketch follows this list.
  • Background in real-time analytics, event-driven architectures, or ML feature pipelines.
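
As a sketch of the CI/CD exposure mentioned above: dbt-core 1.5+ exposes its CLI programmatically through dbtRunner, which a CI job can call to run and test models; the project path here is a placeholder:

    from dbt.cli.main import dbtRunner

    runner = dbtRunner()

    # Run models, then schema tests; fail the CI job on the first error.
    for step in ("run", "test"):
        result = runner.invoke([step, "--project-dir", "/path/to/dbt_project"])
        if not result.success:
            raise SystemExit(f"dbt {step} failed: {result.exception}")

The same two commands are what a typical CI pipeline would execute on each pull request before promoting models.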


Soft Skills

  • Excellent problem-solving and analytical thinking.
  • Strong communication and collaboration across technical and non-technical teams.
  • Ownership mindset and ability to work independently in a fast-paced environment.


Education

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).

Tech Mahindra

Information Technology & Services

Noida
