Posted: 2 days ago | Remote | Full Time
This role may not be a fit if:
● You have less than 5 or more than 8 years of total experience
● You lack hands-on experience with key AWS data services (Glue, Lambda, EMR, Redshift, S3)
● You have limited or no Python programming background
● You are on a notice period longer than 30 days
● You are looking for remote-only work (the role may require hybrid/on-site collaboration)
● You have no hands-on experience designing or maintaining ETL/ELT data pipelines
Our client is a trusted global innovator of IT and business services, present in 50+ countries. They specialize in digital & IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.
We are seeking a highly skilled Data Engineer with strong expertise in AWS cloud services and Python programming. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines, ensuring data availability, quality, and performance across enterprise systems. You will collaborate closely with data analysts, data scientists, and business stakeholders to deliver reliable, high-quality data solutions.
Total experience: 5 to 8 years
Key Responsibilities:
● Design, develop, and maintain ETL/ELT data pipelines using Python and AWS native services (Glue, Lambda, EMR, Step Functions, etc.)
● Build and manage data lakes and data warehouses using Amazon S3, Redshift, Athena, and Lake Formation
● Implement data ingestion from diverse sources (RDBMS, APIs, streaming data, on-premise systems)
● Optimize data workflows for performance, cost, and reliability using AWS tools like Glue Jobs, Athena, and Redshift Spectrum
● Develop reusable, modular Python-based frameworks for data ingestion, transformation, and validation
● Work with stakeholders to understand data requirements, model data structures, and ensure data consistency and governance
● Deploy and manage data infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation
● Implement data quality, monitoring, and alerting using CloudWatch, Glue Data Catalog, or third-party tools
● Support data security and compliance (IAM roles, encryption, data masking, GDPR policies, etc.)
● Collaborate with DevOps and ML teams to integrate data pipelines into analytics and AI workflows
Required Skills & Qualifications:
● Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
● 5 to 8 years of experience as a Data Engineer or similar role.
● Strong programming experience in Python (pandas, boto3, PySpark, SQLAlchemy, etc.)
● Deep hands-on experience with AWS services, including:
○ AWS Glue, Lambda, EMR, Redshift, Athena, S3, Step Functions
○ IAM, CloudWatch, Kinesis (for streaming), and ECS/EKS (for containerized workloads)
● Experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, MongoDB)
● Strong knowledge of data modeling, schema design, and ETL orchestration.
● Familiarity with version control (Git) and CI/CD pipelines for data projects.
● Understanding of data governance, lineage, and cataloging principles.
● Excellent problem-solving, debugging, and performance-tuning skills.
Preferred Qualifications:
● Experience with Apache Spark or PySpark on AWS EMR.
● Exposure to Airflow, dbt, or similar workflow orchestration tools.
● Knowledge of containerization (Docker, Kubernetes) and DevOps practices.
● Experience with machine learning data pipelines or real-time streaming (Kafka, Kinesis).
● Familiarity with AWS Glue Studio, AWS DataBrew, or AWS Lake Formation.
Key Skills:
✔ Strong expertise in AWS Cloud Services – Glue, Lambda, EMR, Step Functions, S3, Redshift, Athena
✔ Hands-on experience in Python programming (pandas, boto3, PySpark, SQLAlchemy, etc.)
✔ Proven experience in building and maintaining ETL/ELT data pipelines on AWS
✔ Experience with data lake and data warehouse design (S3, Redshift, Lake Formation)
✔ Strong SQL and data modeling skills (RDBMS, NoSQL – DynamoDB/MongoDB)
✔ Hands-on experience with IaC tools like Terraform or AWS CloudFormation
✔ Experience in data quality, monitoring, and governance (Glue Data Catalog, CloudWatch, IAM)
✔ Familiarity with Git, CI/CD pipelines, and Agile methodologies
People Prime Worldwide
Location: Hyderabad, Telangana, India
Salary: Not disclosed