Data Engineer – Python | dbt | Redshift | AWS

Experience: 3 years


Posted: 3 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description



About the Company


Catalyst Info Labs Pvt. Ltd.


Key Responsibilities


  • Design, develop, and maintain data pipelines and ETL/ELT processes to support analytics and business operations.
  • Build and optimize data models using dbt (data build tool) and manage transformations within Amazon Redshift.
  • Write and optimize SQL queries for data processing and reporting.
  • Ensure the reliability, performance, and scalability of data infrastructure on AWS.
  • Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to ensure data quality and usability.
  • Implement and manage version control using Git for dbt projects.
  • Support workflow orchestration using tools such as Airflow, Dagster, or Prefect; a minimal DAG sketch follows this list.
  • Maintain documentation for data pipelines, models, and processes.
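
As a rough illustration of the pipeline work described above, the sketch below shows a minimal Airflow DAG that runs dbt models and then dbt tests against Redshift. The DAG name, project paths, and schedule are assumptions for the example, not details from this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Minimal illustrative DAG: build dbt models, then run dbt tests.
    with DAG(
        dag_id="dbt_redshift_daily",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # dbt's Redshift adapter reads connection details from profiles.yml;
        # the directories below are placeholders.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
        )

        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
        )

        # Tests run only after the models have been built.
        dbt_run >> dbt_test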


Required Technical Skills


  • Strong proficiency in Python for data processing, automation, and ETL pipeline development.
  • Hands-on experience with dbt for transformation and data modeling.
  • In-depth knowledge of Amazon Redshift architecture, optimization, and best practices.
  • Expertise in SQL, including complex queries, window functions, and performance tuning (see the window-function sketch after this list).
  • Understanding of data warehouse design principles, dimensional modeling, and star/snowflake schemas.
  • Experience working with AWS services such as S3, Lambda, Glue, and Step Functions.
  • Familiarity with IAM policies, data security, and access management within AWS.
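
As a rough illustration of the Python and SQL skills listed above, the sketch below runs a window-function query against Redshift from Python using psycopg2 (Redshift speaks the PostgreSQL wire protocol). The connection settings and the analytics.orders table are hypothetical.

    import os

    import psycopg2

    # Cumulative spend per customer via a window function (illustrative table and columns).
    QUERY = """
        SELECT
            customer_id,
            order_date,
            order_total,
            SUM(order_total) OVER (
                PARTITION BY customer_id
                ORDER BY order_date
                ROWS UNBOUNDED PRECEDING
            ) AS running_total
        FROM analytics.orders
        ORDER BY customer_id, order_date;
    """

    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],        # cluster endpoint
        port=5439,                               # Redshift's default port
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(QUERY)
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()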


Preferred Qualifications


  • Experience with CI/CD implementation for data pipelines (a minimal sketch follows this list).
  • Exposure to data governance and data lineage tools.
  • Experience with Snowflake, BigQuery, or other cloud data warehouses.
  • Knowledge of streaming technologies such as Kafka or Kinesis.
  • Experience with Infrastructure as Code tools like Terraform or CloudFormation.
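
As a rough illustration of CI/CD for dbt pipelines, the sketch below is a single CI step that fails the build when any dbt model or test fails. The project directory and target name are assumptions.

    import subprocess
    import sys

    # Run models and tests together; dbt returns a non-zero exit code on failure.
    result = subprocess.run(
        ["dbt", "build", "--project-dir", "analytics", "--target", "ci"],
        check=False,
    )

    # Propagate dbt's exit code so the CI job is marked failed when dbt fails.
    sys.exit(result.returncode)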


Experience Levels


  • Junior (1–3 years): Exposure to dbt and Redshift.
  • Mid-level (3–5 years): Proven experience managing production data pipelines.
  • Senior (5+ years): Expertise in data architecture, performance optimization, and mentoring.


Soft Skills


  • Strong analytical and problem-solving skills.
  • Excellent verbal and written communication abilities.
  • Attention to detail with a focus on data quality, testing, and monitoring.
  • Ability to collaborate effectively in a fast-paced, cross-functional environment.


How to Apply


Interested candidates can apply directly on LinkedIn or send their resumes to hr@catalystinfolabs.com with the subject line: “Application – Data Engineer – [Your Name]”.
