Posted: 18 hours ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Opening for AWS Data Engineers. Experience: 6+ years. Work timings: 1.00 p.m. to 10.00 p.m. Duration: 3 months (can be extended). Key skills:

  • AWS Data Engineering, AWS services (AWS Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, and IAM).
  • Python, PySpark, and Apache Spark; data modelling; on-prem/cloud data warehouses.

Skills Table:
  • Cloud Platform: AWS Data Engineering
  • AWS Services: Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, IAM
  • Programming: Python, PySpark, Apache Spark
  • Data Management: Data Modelling, On-Prem/Cloud Data Warehouse
  • DevOps: CI/CD, Automation, Deployment

Description:
We are seeking an experienced AWS Data Engineer with 6+ years of experience and a strong understanding of large, complex, multi-dimensional datasets. The ideal candidate will design, develop, and maintain scalable data pipelines and transformation frameworks using AWS-native tools and modern data engineering technologies.

Role

The role requires hands-on experience with AWS Data Engineering services and strong data modelling expertise. Exposure to Veeva API integration will be a plus (not mandatory).

Responsibilities:
  • Design, develop, and optimize data ingestion, transformation, and storage pipelines on AWS.
  • Manage and process large-scale structured, semi-structured, and unstructured datasets efficiently.
  • Build and maintain ETL/ELT workflows using AWS native tools such as Glue, Lambda, EMR, and Step Functions.
  • Design and implement scalable data architectures leveraging Python, PySpark, and Apache Spark.
  • Develop and maintain data models and ensure alignment with business and analytical requirements.
  • Work closely with stakeholders, data scientists, and business analysts to ensure data availability, reliability, and quality.
  • Handle on-premises and cloud data warehouse databases and optimize performance.
  • Stay updated with emerging trends and technologies in data engineering, analytics, and cloud platforms.

Requirements:
  • Mandatory: proven hands-on experience with the AWS Data Engineering stack, including but not limited to AWS Glue, S3, Redshift, EMR, Lambda, Step Functions, Kinesis, Athena, and IAM.
  • Proficiency in Python, PySpark, and Apache Spark for data transformation and processing.
  • Strong understanding of data modelling principles and ability to design and maintain conceptual, logical, and physical data models.
  • Experience working with one or more modern data platforms: Snowflake, Dataiku, or Alteryx (good to have, not mandatory).
  • Familiarity with on-prem/cloud data warehouse systems and migration strategies.
  • Solid understanding of ETL design patterns, data governance, and best practices in data quality and security.
  • Knowledge of DevOps for data engineering: CI/CD pipelines and Infrastructure as Code (IaC) using Terraform/CloudFormation (good to have, not mandatory).
  • Excellent problem-solving, analytical, and communication skills.

Ideal candidate:
  • Qualification - Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
  • Experience with cloud data engineering tools/components/technologies such as AWS Glue, EMR, S3 & EC2.
  • Continual learning mindset to understand emerging trends in the data science field.
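The ETL/ELT pipeline work described above centres on the extract-transform-load pattern. A minimal, standard-library-only Python sketch of that pattern follows; it has no AWS dependencies, and every function, field, and value in it is illustrative rather than taken from the posting:

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV records (stand-in for reading objects from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalise fields and drop malformed records (a basic data-quality gate)."""
    cleaned = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue  # skip records that fail validation
        cleaned.append({"id": row["id"].strip(), "amount": amount})
    return cleaned

def load(rows: list[dict]) -> dict:
    """Load: materialise records keyed by id (stand-in for writing to Redshift)."""
    return {row["id"]: row["amount"] for row in rows}

if __name__ == "__main__":
    raw = "id,amount\n a1 ,10.5\na2,not-a-number\na3,7\n"
    print(load(transform(extract(raw))))  # {'a1': 10.5, 'a3': 7.0}
```

In a real Glue or EMR job, the extract and load stages would use Spark readers/writers or the AWS SDK rather than in-memory strings, but the shape of the pattern is the same.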
(ref:hirist.tech)
