Posted: 1 day ago | Platform: LinkedIn

Work Mode: Remote
Job Type: Full Time

Job Description

Important Note (Please Read Before Applying)

Please do NOT apply if any of the following applies to you:

  • less than 5 years of relevant experience
  • do not have hands-on experience with AWS services and Python
  • notice period longer than 15 days
  • looking for a remote-only role
  • non–data engineering background

Apply ONLY if none of the above applies to you.

Role: AWS Data Engineer
Exp: 5 to 10 Years
Location: Hyderabad
Mode: Hybrid


Job Overview:

This role calls for an experienced Data Engineer with strong, hands-on expertise in AWS cloud services and Python programming.



Key Responsibilities

  • Design, develop, and maintain ETL/ELT data pipelines using Python and AWS native services (Glue, Lambda, EMR, Step Functions, etc.); a short illustrative sketch follows this list
  • Build and manage data lakes and data warehouses using Amazon S3, Redshift, Athena, and Lake Formation
  • Implement data ingestion from diverse sources (RDBMS, APIs, streaming data, on-premise systems)
  • Optimize data workflows for performance, cost, and reliability using AWS tools like Glue Jobs, Athena, and Redshift Spectrum
  • Develop reusable, modular Python-based frameworks for data ingestion, transformation, and validation
  • Work with stakeholders to understand data requirements, model data structures, and ensure data consistency and governance
  • Deploy and manage data infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation
  • Implement data quality, monitoring, and alerting using CloudWatch, Glue Data Catalog, or third-party tools
  • Support data security and compliance (IAM roles, encryption, data masking, GDPR policies, etc.)
  • Collaborate with DevOps and ML teams to integrate data pipelines into analytics and AI workflows
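
To give a concrete flavor of the pipeline work described above, here is a minimal, hypothetical sketch of a single ETL step in Python: read a raw CSV object from S3 with boto3, validate and transform it with pandas, and write the result back as Parquet. The bucket names, object keys, and column names are illustrative placeholders only, and writing Parquet this way assumes pyarrow (or fastparquet) is installed.

    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")
    RAW_BUCKET = "example-raw-data"          # hypothetical source bucket
    PROCESSED_BUCKET = "example-processed"   # hypothetical target bucket

    def process_object(key: str) -> str:
        # Extract: download the raw CSV object into a DataFrame.
        obj = s3.get_object(Bucket=RAW_BUCKET, Key=key)
        df = pd.read_csv(io.BytesIO(obj["Body"].read()))

        # Transform/validate: drop rows missing required fields, normalize timestamps.
        df = df.dropna(subset=["id", "event_time"])
        df["event_time"] = pd.to_datetime(df["event_time"], utc=True)

        # Load: write the cleaned data as Parquet to the processed bucket.
        out_key = key.rsplit(".", 1)[0] + ".parquet"
        buffer = io.BytesIO()
        df.to_parquet(buffer, index=False)   # requires pyarrow or fastparquet
        s3.put_object(Bucket=PROCESSED_BUCKET, Key=out_key, Body=buffer.getvalue())
        return out_key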

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 5 to 8 years of experience as a Data Engineer or in a similar role.
  • Strong programming experience in Python (pandas, boto3, PySpark, SQLAlchemy, etc.).
  • Deep hands-on experience with AWS services, including:
    • AWS Glue, Lambda, EMR, Redshift, Athena, S3, Step Functions
    • IAM, CloudWatch, Kinesis (for streaming), and ECS/EKS (for containerized workloads)
  • Experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, MongoDB).
  • Strong knowledge of data modeling, schema design, and ETL orchestration; a short orchestration sketch follows this list.
  • Familiarity with version control (Git) and CI/CD pipelines for data projects.
  • Understanding of data governance, lineage, and cataloging principles.
  • Excellent problem-solving, debugging, and performance-tuning skills.
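
As an illustration of the ETL orchestration point above, the following is a minimal sketch that starts an AWS Glue job run from Python with boto3 and polls it until it reaches a terminal state. The job name "nightly_ingest" and its argument are hypothetical; they stand in for whatever Glue jobs the actual pipelines use.

    import time

    import boto3

    glue = boto3.client("glue")

    def run_glue_job(job_name: str = "nightly_ingest") -> str:
        # Kick off a job run; Arguments are passed to the Glue script as --key/value pairs.
        run = glue.start_job_run(
            JobName=job_name,
            Arguments={"--target_date": "2024-01-01"},  # hypothetical job parameter
        )
        run_id = run["JobRunId"]

        # Poll until Glue reports a terminal state for this run.
        while True:
            state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
            if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
                return state
            time.sleep(30)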


Preferred / Nice-to-Have Skills

  • Experience with Apache Spark or PySpark on AWS EMR; a short PySpark sketch follows this list.
  • Exposure to Airflow, dbt, or similar workflow orchestration tools.
  • Knowledge of containerization (Docker, Kubernetes) and DevOps practices.
  • Experience with machine learning data pipelines or real-time streaming (Kafka, Kinesis).
  • Familiarity with AWS Glue Studio, AWS DataBrew, or AWS Lake Formation.
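
For the Spark/PySpark item above, here is a minimal sketch of a typical EMR-style batch transformation: read Parquet event data from S3, roll it up by day and event type, and write the result back partitioned by date. The S3 paths and column names are hypothetical placeholders.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_rollup").getOrCreate()

    # Read processed events (hypothetical path) produced by upstream ingestion.
    events = spark.read.parquet("s3://example-processed/events/")

    # Aggregate event counts per day and event type.
    daily = (
        events
        .withColumn("event_date", F.to_date("event_time"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Write the rollup back to S3, partitioned by date (hypothetical path).
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-analytics/daily_rollup/"
    )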


Soft Skills

  • Strong analytical and communication skills.
  • Ability to work independently and in cross-functional teams.
  • Passion for automation and continuous improvement.
  • Adaptability in fast-paced, evolving cloud environments.
