Senior Data Engineer - Snowflake Coding

Experience: 3 years

Salary: 4 - 18 Lacs

Posted: 1 day ago | Platform: GlassDoor

Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Data Engineer
Experience Required: 3 to 7 years
Location: US
Job Type: Full-time

We are looking for a skilled Data Engineer with strong experience in Snowflake, Azure, GCP, PySpark, and DBT to join our growing data team. The ideal candidate will be responsible for building and optimizing scalable data pipelines and data warehouse solutions to support business intelligence, analytics, and reporting needs.

Key Responsibilities:

  • Design, build, and maintain ETL/ELT pipelines using PySpark and DBT for efficient data ingestion, transformation, and processing (a minimal pipeline sketch follows this list).
  • Develop and optimize data warehouse structures and pipelines in Snowflake for performance and cost-efficiency.
  • Integrate data from diverse sources including APIs, files, and cloud platforms, particularly Azure (e.g., Azure Blob Storage, Data Factory, Azure Functions) and GCP (e.g., GCS, Cloud Functions, Dataflow).
  • Implement and manage data orchestration and workflow automation using DBT and scheduling tools (e.g., Apache Airflow, Azure Data Factory, GCP Composer).
  • Ensure data quality, consistency, and lineage through validation and monitoring processes.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver actionable data solutions.
  • Manage data governance and security policies, including role-based access, data masking, and tagging in Snowflake.
  • Troubleshoot and resolve data-related issues and optimize performance for data pipelines and workflows.
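
For context on the pipeline work above, a minimal PySpark-to-Snowflake load might look like the sketch below. It is an illustration only, not part of the role: the storage paths, credentials, table names, and connection options are hypothetical placeholders, and the Snowflake Spark connector is assumed to be available on the cluster.

```python
# Illustrative PySpark -> Snowflake batch load. All paths, credentials,
# and table names are hypothetical placeholders, not real endpoints.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Ingest raw files from Azure Data Lake Storage (Gen2) and GCS.
azure_df = spark.read.parquet("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
gcs_df = spark.read.parquet("gs://example-bucket/orders/")

# Basic cleanup: combine sources, deduplicate, normalise timestamps.
orders = (
    azure_df.unionByName(gcs_df)
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Connection options for the Snowflake Spark connector (placeholder values).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "SVC_ETL",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

# Append the cleaned data into a Snowflake staging table; downstream
# modelling (e.g., with DBT) would build marts on top of this table.
(
    orders.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_CLEAN")
    .mode("append")
    .save()
)
```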

Required Skills & Qualifications:

  • 3+ years of experience in Data Engineering.
  • Strong hands-on experience with Snowflake: warehouse management, performance tuning, RBAC, data sharing, and cost optimization (a short governance sketch follows this list).
  • Experience with cloud platforms such as Microsoft Azure (e.g., Azure Blob Storage, Azure Functions, Azure Data Factory) and Google Cloud Platform (e.g., GCS, Dataflow, BigQuery).
  • Proficient in PySpark for distributed data processing.
  • Strong working knowledge of DBT (Data Build Tool) for data modeling and transformation.
  • Proficiency in SQL and scripting languages like Python.
  • Familiarity with CI/CD pipelines and version control systems (e.g., Git).
  • Solid understanding of data warehousing principles, data lakes, and big data processing best practices.
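
For the Snowflake governance items above (RBAC, data masking), the setup is plain Snowflake SQL; the sketch below issues the statements through the snowflake-connector-python driver. It is a hedged illustration under assumed names: the account, roles, table, and policy are hypothetical placeholders.

```python
# Illustrative Snowflake governance setup (RBAC + column masking policy).
# Account, role, table, and policy names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="SECURITY_ADMIN_USER",
    password="********",
    warehouse="ADMIN_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

statements = [
    # Role-based access control: a restricted analyst role with read access.
    "CREATE ROLE IF NOT EXISTS ANALYST_RESTRICTED",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_RESTRICTED",
    "GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_RESTRICTED",
    "GRANT SELECT ON TABLE ANALYTICS.PUBLIC.CUSTOMERS TO ROLE ANALYST_RESTRICTED",
    # Column-level masking: hide email addresses from restricted roles.
    """
    CREATE MASKING POLICY IF NOT EXISTS EMAIL_MASK AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val ELSE '***MASKED***' END
    """,
    "ALTER TABLE ANALYTICS.PUBLIC.CUSTOMERS MODIFY COLUMN EMAIL "
    "SET MASKING POLICY EMAIL_MASK",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```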

Preferred Qualifications:

  • Experience with real-time data streaming tools such as Kafka or Google Pub/Sub (a streaming sketch follows this list).
  • Familiarity with data cataloging tools (e.g., Azure Purview, Google Data Catalog, Amundsen).
  • Exposure to data visualization tools like Power BI, Tableau, or Looker.
  • Snowflake certification is a plus.
  • Strong communication skills and the ability to contribute effectively to team discussions and project planning.
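
As a rough illustration of the streaming exposure mentioned above, Spark Structured Streaming can consume a Kafka topic and land micro-batches in Snowflake via foreachBatch (the Snowflake Spark connector itself writes in batch mode). The broker, topic, and connection details below are hypothetical placeholders.

```python
# Illustrative streaming ingestion: Kafka -> Spark Structured Streaming -> Snowflake.
# Broker address, topic, and Snowflake options are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "orders")
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_ts"),
    )
)

# Same shape as the batch example earlier; values are placeholders.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "SVC_ETL",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

def write_batch(batch_df, epoch_id):
    # Each micro-batch is written with a normal batch write inside foreachBatch,
    # since the Snowflake Spark connector does not support writeStream directly.
    (
        batch_df.write.format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "ORDERS_EVENTS")
        .mode("append")
        .save()
    )

query = events.writeStream.foreachBatch(write_batch).start()
query.awaitTermination()
```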

Job Types: Full-time, Permanent

Pay: ₹473,019.95 - ₹1,824,238.69 per year

Benefits:

  • Cell phone reimbursement
  • Food provided
  • Health insurance
  • Paid sick time
  • Paid time off
  • Provident Fund

Schedule:

  • Day shift
  • Fixed shift
  • Monday to Friday

Work Location: In person
