Senior Data Engineer - Coimbatore

4 years

144 - 240 Lacs

Posted: 1 day ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

About The Role

We are seeking a highly skilled Senior Data Engineer with expertise in SQL, Python, and Spark to join our team. The ideal candidate will have extensive experience in designing, building, and optimizing data pipelines on AWS, Databricks, Snowflake, and Airflow, enabling robust analytics and machine learning use cases. You will work closely with data scientists, analysts, and business stakeholders to ensure the availability, reliability, and performance of our data infrastructure.

Key Responsibilities

  • Data Pipeline Development:
    • Design, implement, and maintain scalable ETL/ELT pipelines using PySpark and Airflow.
    • Ingest data from various sources into Snowflake and other AWS-based storage and processing layers.
  • Data Architecture & Modeling:
    • Build and optimize data models for analytics and BI tools.
    • Ensure data consistency, integrity, and quality across multiple systems.
  • Performance Optimization:
    • Optimize Spark jobs and SQL queries for cost and performance.
    • Implement partitioning, caching, and efficient storage formats (Parquet, Delta).
  • Cloud & Platform Engineering:
    • Leverage AWS services such as S3, Lambda, Glue, EMR, and Redshift for large-scale data processing.
    • Work with Databricks to build unified data engineering and ML pipelines.
  • Automation & Orchestration:
    • Automate workflows using Apache Airflow (DAG creation, scheduling, monitoring).
    • Implement CI/CD pipelines for data workflows.
  • Data Governance & Security:
    • Ensure compliance with data governance, privacy, and security policies.
    • Implement monitoring, logging, and alerting for data pipelines.
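The pipeline responsibilities above follow a standard extract-transform-load shape. As an illustration only, here is a minimal sketch in plain Python (the PySpark and Airflow APIs the role uses are omitted so the sketch stays self-contained); the record fields and function names are hypothetical:

```python
from collections import defaultdict

# Hypothetical raw records; in the role described, these would be ingested
# from sources such as S3 via Glue or EMR.
RAW_EVENTS = [
    {"event_date": "2024-01-01", "user_id": 1, "amount": "10.5"},
    {"event_date": "2024-01-01", "user_id": 2, "amount": "bad"},
    {"event_date": "2024-01-02", "user_id": 1, "amount": "7.0"},
]

def extract(source):
    """Extract step: yield raw rows from the source."""
    yield from source

def transform(rows):
    """Transform step: cast amounts, dropping rows that fail the cast."""
    for row in rows:
        try:
            yield {**row, "amount": float(row["amount"])}
        except ValueError:
            continue  # a real pipeline would quarantine bad records instead

def load(rows):
    """Load step: group rows by event_date, mirroring the date-based
    partitioning typically applied to Parquet/Delta output."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[row["event_date"]].append(row)
    return dict(partitions)

warehouse = load(transform(extract(RAW_EVENTS)))
print(sorted(warehouse))             # partition keys
print(len(warehouse["2024-01-01"]))  # the malformed row was dropped
```

In PySpark the transform step would be a DataFrame expression and the load step a partitioned write, with Airflow scheduling each step as a task in a DAG.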

Required Skills & Experience

  • Core Technical Skills:
    • Strong command of SQL (complex joins, window functions, performance tuning).
    • Proficient in Python for data engineering use cases.
    • Expertise in Apache Spark (PySpark or Spark SQL).
  • Platform Expertise:
    • Hands-on experience with AWS data services (S3, Glue, EMR, Lambda, Redshift).
    • Strong working knowledge of Databricks for both ETL and ML workflows.
    • Experience with Snowflake (data loading, modeling, optimization).
    • Proficiency in Apache Airflow for orchestration.
  • Additional Skills:
    • Solid understanding of data lakehouse and warehouse architectures.
    • Experience with Delta Lake, Parquet, or ORC formats.
    • Familiarity with CI/CD and DevOps practices for data.
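The window-function skill called out above can be illustrated with a running total per group, here via Python's built-in sqlite3 module (SQLite ≥ 3.25 for window-function support); the table and column names are made up for the example:

```python
import sqlite3

# Illustrative data only; schema and values are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, order_day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("south", "2024-01-01", 100.0),
     ("south", "2024-01-02", 50.0),
     ("north", "2024-01-01", 80.0)],
)

# Window function: running total of amount within each region, ordered by day.
rows = conn.execute(
    """
    SELECT region, order_day,
           SUM(amount) OVER (
               PARTITION BY region ORDER BY order_day
           ) AS running_total
    FROM orders
    ORDER BY region, order_day
    """
).fetchall()
print(rows)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to Snowflake, Redshift, and Spark SQL.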

Preferred Qualifications

  • 4+ years of experience in data engineering.
  • Experience in building real-time/streaming pipelines (Kafka, Kinesis).
  • Knowledge of data governance tools and practices.
  • Exposure to DataOps and data pipeline practices.

Skills: data, AWS, Spark, data engineering, Snowflake, Databricks
