Posted: 13 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Senior Data Engineer

Experience: 8–13 years

Location: Gurgaon

Interview Mode: Face-to-Face (F2F)

Domain Preference: Financial services (preferred)

Notice Period: 60 to 90 days

Joining: Immediate joiners considered a plus

Position Overview

Senior Data Engineer

Key Capabilities

  • Passion for technology and keeping up with the latest trends
  • Ability to articulate complex technical issues and system enhancements
  • Proven analytical and evidence-based decision-making skills
  • Strong problem-solving, troubleshooting, and documentation abilities
  • Excellent written and verbal communication skills
  • Effective collaboration and interpersonal skills
  • High delivery focus with commitment to quality and auditability
  • Ability to self-manage and work in a fast-paced environment
  • Experience with Agile software development practices

Desired Skills & Experience

  • Hands-on experience in SQL and Big Data SQL variants (HiveQL, Snowflake ANSI, Redshift SQL)
  • Expertise in Python, Spark (PySpark, Spark SQL, Scala), and Bash/Shell scripting
  • Experience with source code control tools (GitHub, VSTS, Bitbucket)
  • Familiarity with Big Data technologies: the Hadoop stack (HDFS, Hive, Impala, Spark) and cloud warehouses (AWS Redshift, Snowflake)
  • Unix/Linux command-line experience
  • Exposure to AWS services: EMR, Glue, Athena, Data Pipeline, Lambda
  • Knowledge of data models (Star Schema, Data Vault 2.0)

Essential Experience

  • 8–13 years of technical experience, preferably in the financial services industry
  • Strong background in Data Engineering/BI/Software Development, ELT/ETL, and data transformation in Data Lake / Data Warehouse / Lakehouse environments
  • Programming with Python, SQL, Unix shell scripts, and PySpark in enterprise-scale environments
  • Experience in configuration management (Ansible, Jenkins, Git)
  • Cloud design and development experience with AWS and Azure
  • Proficiency with AWS services (S3, EC2, EMR, SNS, SQS, Lambda, Redshift)
  • Building data pipelines on Databricks Delta Lake from databases, flat files, and streaming sources
  • CI/CD pipeline automation (Jenkins, Docker)
  • Experience with Terraform, Kubernetes, and Docker
  • RDBMS experience (Oracle, MS SQL Server, DB2, PostgreSQL, MySQL), including performance tuning and stored procedures
  • Knowledge of Power BI (recommended)

Qualification Requirements

  • Bachelor’s or Master’s degree in a technology-related discipline (Computer Science, IT, Data Engineering, etc.)

Key Accountabilities

  • Design, develop, test, deploy, maintain, and improve software and data solutions
  • Create technical documentation, flowcharts, and layouts to define solution requirements
  • Write clean, high-quality, testable code
  • Integrate software components into fully functional platforms
  • Apply best practices for CI/CD and cloud-based deployments
  • Mentor other team members and share data engineering best practices
  • Troubleshoot, debug, and upgrade existing solutions
  • Ensure compliance with industry standards and regulatory requirements

Interview Drive Details

  • Mode: Face-to-Face (F2F)
  • Date: 30th August
  • Location: Gurgaon
  • Notice Period: 60 to 90 days (immediate joiners considered a plus)
