Snowflake Data Engineer

Experience: 3 - 8 years

Salary: 6 - 16 Lacs

Posted: 3 months ago | Platform: Naukri

Work Mode: Hybrid

Job Type: Full Time

Job Description

Position Name: Snowflake Data Engineer
Experience: 3+ years
Key Skills: Snowflake, Kafka, ETL, SQL, Data Pipeline, Python, working knowledge of Power BI

We are looking for a skilled Data Engineer with expertise in Snowflake and Apache Kafka to contribute to a technology-driven company specializing in innovative data solutions. This role involves designing, developing, and optimizing data pipelines to build a scalable, high-performance data ecosystem. The ideal candidate will be responsible for integrating various data sources, ensuring seamless data flow, and collaborating with cross-functional teams to support advanced analytics initiatives.

What you'll be doing:

  • Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
  • Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
  • Create and optimize ETL/ELT workflows using tools like dbt, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
  • Tune query performance, warehouse sizing, and pipeline efficiency using Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
  • Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
  • Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
  • Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
  • Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
  • Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.

What you need (basic skills):

  • 3+ years of hands-on experience with the Snowflake data platform, including data modeling, performance tuning, and optimization.
  • Strong experience with Apache Kafka for stream processing and real-time data integration.
  • Proficiency in SQL and ETL/ELT processes.
  • Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
  • Experience with scripting languages such as Python, Shell, or similar for automation and data integration tasks.
  • Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
  • Ability to work in a collaborative team environment and communicate effectively with cross-functional teams.

Responsibilities:

  • Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features such as clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
  • Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, dbt, or custom scripts to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms (see the orchestration sketch after this description).
  • Collaborate with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
  • Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion (see the ingestion sketch after this description).
  • Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
  • Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
  • Pipeline monitoring and optimization: deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
  • Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing, to meet compliance standards and safeguard sensitive information (see the governance sketch after this description).
  • Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
  • Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform.
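
For the Kafka-to-Snowflake ingestion responsibilities above, the following is a minimal, illustrative sketch of a Snowpipe-based landing path using the snowflake-connector-python package. The account, credentials, bucket, storage integration, and object names are all placeholders, and a real deployment might instead use the Snowflake Kafka connector end to end.

    # Minimal Snowpipe landing setup, assuming the snowflake-connector-python package.
    # All names, credentials, and the S3 location below are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="change_me",      # in practice, pull from a secrets manager
        warehouse="INGEST_WH",
        database="ANALYTICS",
        schema="RAW",
    )

    ddl_statements = [
        # Landing table for semi-structured Kafka events.
        "CREATE TABLE IF NOT EXISTS raw_events (payload VARIANT)",
        # External stage that the Kafka sink (e.g. an S3 sink connector) writes JSON files to.
        """CREATE STAGE IF NOT EXISTS raw_events_stage
               URL = 's3://example-bucket/kafka-events/'
               STORAGE_INTEGRATION = my_s3_integration
               FILE_FORMAT = (TYPE = JSON)""",
        # Snowpipe picks up new files automatically via cloud event notifications.
        """CREATE PIPE IF NOT EXISTS raw_events_pipe AUTO_INGEST = TRUE AS
               COPY INTO raw_events (payload)
               FROM @raw_events_stage
               FILE_FORMAT = (TYPE = JSON)""",
    ]

    cur = conn.cursor()
    try:
        for stmt in ddl_statements:
            cur.execute(stmt)
    finally:
        cur.close()
        conn.close()

With objects like these in place, new files landed by the Kafka sink are loaded continuously by Snowpipe auto-ingest, without manual COPY commands.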
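
The Airflow/dbt orchestration mentioned in the responsibilities could be sketched roughly as below, assuming Airflow 2.x with the BashOperator and a dbt project at a placeholder path; the DAG id, task names, schedule, and paths are illustrative, not a prescribed design.

    # Illustrative daily ELT DAG: load/validate raw landings, then run dbt models.
    # Assumes Airflow 2.x; ids, paths, and schedules are placeholders.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    default_args = {
        "owner": "data-engineering",
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
    }

    with DAG(
        dag_id="snowflake_elt_daily",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        # Step 1: run the loading/validation script for raw Kafka landings.
        load_raw = BashOperator(
            task_id="load_raw_events",
            bash_command="python /opt/pipelines/load_raw_events.py",
        )

        # Step 2: transform RAW tables into analytics marts with dbt.
        run_dbt = BashOperator(
            task_id="run_dbt_models",
            bash_command="cd /opt/dbt/analytics && dbt run --target prod",
        )

        load_raw >> run_dbt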
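
For the governance and cost-control items (RBAC, data masking, Resource Monitors), the statements below are one possible starting point; the policy, role, warehouse, and table names are invented for illustration, and they could be executed with the same snowflake-connector-python cursor as in the ingestion sketch.

    # Illustrative governance and cost-control DDL for Snowflake.
    # Object names are placeholders; run with an open snowflake-connector-python cursor.
    GOVERNANCE_DDL = [
        # Mask PII emails for everyone except a privileged role.
        """CREATE OR REPLACE MASKING POLICY pii_email_mask AS (val STRING) RETURNS STRING ->
               CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '***MASKED***' END""",
        "ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY pii_email_mask",
        # Basic RBAC: read-only access to the marts schema for analysts.
        "CREATE ROLE IF NOT EXISTS analyst_role",
        "GRANT USAGE ON DATABASE analytics TO ROLE analyst_role",
        "GRANT USAGE ON SCHEMA analytics.marts TO ROLE analyst_role",
        "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marts TO ROLE analyst_role",
        # Cost control: notify at 80% of the monthly credit quota, suspend at 100%.
        """CREATE OR REPLACE RESOURCE MONITOR elt_monthly_monitor
               WITH CREDIT_QUOTA = 100
               TRIGGERS ON 80 PERCENT DO NOTIFY
                        ON 100 PERCENT DO SUSPEND""",
        "ALTER WAREHOUSE ingest_wh SET RESOURCE_MONITOR = elt_monthly_monitor",
    ]

    def apply_governance(cursor) -> None:
        """Apply masking, RBAC, and resource-monitor DDL using an open Snowflake cursor."""
        for stmt in GOVERNANCE_DDL:
            cursor.execute(stmt)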

Intellikart Ventures Llp

Information Technology

51-200 Employees

3 Jobs

Key People

  • John Doe, CEO
  • Jane Smith, CTO
