Posted: 1 week ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

Key Responsibilities:

  • Design, implement, and manage scalable data pipelines on AWS, utilizing services such as Glue, EMR, Kinesis, Lambda, Athena, and S3 (a brief illustrative sketch follows this list).
  • Lead a technical team or squad to deliver data engineering projects on time and efficiently.
  • Ensure high-performance and fault-tolerant solutions for handling large-scale data sets and real-time data streams.
  • Collaborate with data scientists, analysts, and other engineers to gather requirements and build data solutions for analytics and machine learning models.
  • Optimize existing data processes and pipelines for speed, scalability, and cost-efficiency.
  • Write and optimize complex queries in SQL, PySpark, and Python to process and transform large data sets.
  • Design and implement data architectures that support data lakes, data warehouses, and analytics platforms.
  • Monitor, troubleshoot, and resolve any issues related to data pipelines, ensuring high data quality and system reliability.
  • Ensure compliance with data governance, privacy, and security standards.
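
For a sense of the day-to-day work, here is a minimal PySpark sketch of the kind of S3-to-S3 batch transform described above. All bucket names, paths, and column names are hypothetical placeholders; a production Glue or EMR job would add schema enforcement, error handling, and monitoring on top of this.

  # Minimal sketch of a batch pipeline: read raw JSON from S3,
  # clean it, and write partitioned Parquet that Athena can query.
  # Buckets, paths, and columns below are illustrative, not real.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

  # Raw events landed in S3 (the s3:// scheme resolves via EMRFS on EMR;
  # a self-managed Spark cluster would typically use s3a:// instead)
  raw = spark.read.json("s3://example-raw-bucket/orders/")

  # Drop malformed rows and normalize types
  orders = (
      raw.filter(F.col("order_id").isNotNull())
         .withColumn("order_date", F.to_date("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
  )

  # Partitioned Parquet keeps downstream Athena scans cheap
  (orders.write
         .mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-bucket/orders/"))

Partitioning the curated output by date is one common way to meet the cost-efficiency responsibility above, since Athena then scans only the partitions a query touches.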

Qualifications & Experience:

Bachelor's degree in Computer Science, Engineering, or a related field.

Experience

  • 5+ years of experience in Data Engineering or Software Engineering roles.
  • 3+ years of experience leading technical teams or squads in an engineering capacity.

Skills & Expertise

  • Deep expertise in AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, S3.
  • Advanced programming experience with PySpark, Python, and SQL.
  • Proven experience building, deploying, and managing scalable data pipelines on cloud platforms (preferably AWS).
  • Strong understanding of data architecture, big data platforms, and real-time data processing.
  • Experience with version control, CI/CD processes, and automation tools.
  • Strong problem-solving and troubleshooting skills.
  • Excellent communication and leadership abilities.

Preferred Skills

  • AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Big Data - Specialty) would be a plus.
  • Experience with data warehousing solutions such as Amazon Redshift, Snowflake, or similar platforms.
  • Familiarity with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of data governance, security best practices, and data privacy regulations.
