2 Snowflake Design Jobs

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

7 - 12 years

7 - 12 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Data Engineer
Experience Level: 6+ years in Data Engineering

Key Responsibilities
As a Data Engineer, you will:
- Data Engineering: Apply expertise in Snowflake, DBT, and Python to develop and manage data solutions.
- Cloud Integration: Utilize AWS Cloud services, specifically AWS S3 and Lambda, for data operations.
- SQL Development: Leverage strong SQL knowledge for data manipulation and querying.

Mandatory Skills & Experience
Technical Proficiency:
- SQL: Strong knowledge of SQL.
- Data Warehousing & Transformation: Expertise in Snowflake and DBT (minimum 2 years of experience).
- Programming: Expertise in Python (minimum 2 years of experience).
- Cloud Platform: Knowledge of AWS Cloud, specifically AWS S3 and Lambda.
- ETL/ELT Tools (desirable): Knowledge of SnapLogic or Fivetran is an added advantage.

Experience & Qualifications:
- 6+ years of experience in data engineering.
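As an illustration of the stack this posting names (Snowflake, Python, and AWS S3/Lambda), below is a minimal, hypothetical sketch of a Lambda handler that copies a newly landed S3 file into a Snowflake table via an external stage. The stage, table, warehouse, and credential handling are assumptions for illustration only, not details from the listing.

```python
import os

import snowflake.connector  # snowflake-connector-python, packaged with the Lambda


def handler(event, context):
    """Triggered by an S3 put event; loads the new file into Snowflake."""
    # Extract the key of the object that just landed in the bucket
    object_key = event["Records"][0]["s3"]["object"]["key"]

    # Connect with credentials injected via environment variables (placeholder approach)
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",    # hypothetical warehouse
        database="ANALYTICS",   # hypothetical database
        schema="RAW",           # hypothetical schema
    )
    try:
        # COPY from a hypothetical external stage that points at the landing bucket
        conn.cursor().execute(
            f"COPY INTO raw_events FROM @landing_stage/{object_key} "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
    finally:
        conn.close()
```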

Posted 1 week ago

Apply

8 - 12 years

8 - 12 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Source: Foundit

Experience:
- 8+ years of experience in data engineering, specifically in cloud environments such as AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services such as Lambda, S3, and EC2.
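For context on the PySpark and AWS Glue work this posting describes, here is a minimal, hypothetical Glue ETL job sketch: read raw CSV from S3, apply a simple transformation, and write partitioned Parquet back to S3. Bucket paths, column names, and the transformation itself are illustrative placeholders, not requirements from the listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve the job name and initialise the job
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)
spark = glue_context.spark_session

# Read raw CSV files from the landing zone (placeholder bucket/path)
raw = spark.read.option("header", "true").csv("s3://example-landing-bucket/orders/")

# Simple cleanup: cast types and drop rows missing the business key
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id"])
)

# Write curated, partitioned Parquet to the analytics zone (placeholder path)
(
    cleaned.withColumn("load_date", F.to_date("order_ts"))
           .write.mode("overwrite")
           .partitionBy("load_date")
           .parquet("s3://example-curated-bucket/orders/")
)

job.commit()
```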

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
