Posted: 1 day ago | Platform: LinkedIn

Work Mode: Remote

Job Type: Full Time

Job Description

Job Title:
Location:
Experience:
Employment Type:
Notice Period:
Mode of Interview:

About the Company

Our client is a trusted global innovator of IT and business services, present in 50+ countries. They specialize in digital & IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.


About the Role

Senior Data Engineer

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
  • Work with big data ecosystems to process and transform large datasets efficiently.
  • Build and optimize PySpark jobs for batch and streaming data processing.
  • Develop, manage, and optimize Snowflake data warehouse models and structures.
  • Leverage AWS cloud services (such as S3, Lambda, Glue, EMR, Redshift, and IAM) for data engineering tasks.
  • Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements.
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Monitor and troubleshoot pipeline issues and performance bottlenecks.
  • Contribute to best practices, automation, and documentation.

Required Skills & Qualifications

  • 6+ years of experience as a Data Engineer or in a similar role.
  • Strong programming skills in Python.
  • Hands-on experience with PySpark for large-scale data processing.
  • Proficiency in big data technologies (Hadoop, Spark, Hive, etc.).
  • Practical experience building and managing solutions on AWS.
  • Strong knowledge of Snowflake (data modeling, performance tuning, query optimization).
  • Experience with CI/CD, version control (Git), and workflow orchestration tools (Airflow, Step Functions, etc.).
  • Solid understanding of data warehousing, data modeling, and ETL/ELT concepts.

Preferred Qualifications

  • Experience with real-time data processing (Kafka, Kinesis).
  • Exposure to DevOps practices and Infrastructure as Code (Terraform, CloudFormation).
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills with attention to detail.


Important Note (Please Read Before Applying)

Please do NOT apply if:

  • You have less than 6 years of experience.
  • You do not have hands-on Python, AWS, PySpark, and big data experience.
  • You are on a notice period longer than 15 days.
  • You are looking for a remote-only role (this role is hybrid in Chennai).
  • You are a fresher or from an unrelated background (e.g., support, testing only, non-Java roles).

Apply ONLY if you meet all of the above criteria.
