Posted: 8 hours ago · On-site · Full Time
Do NOT apply if:
• You have less than 8 years or more than 10 years of total experience
• You do NOT have strong Python + AWS Data Engineering experience
• You are NOT hands-on with Glue/EMR/Redshift/Athena
• You are on a notice period longer than 30 days
• You lack real experience in building data pipelines end-to-end
• You are from unrelated backgrounds (support/testing-only/non-data roles)
Random / irrelevant applications will not be processed.
About the Client:
Our client is a trusted global innovator of IT and business services, present in 50+ countries. They specialize in digital & IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.
Key Responsibilities:
• Architect, build, and optimize scalable data pipelines using AWS services (Glue, Lambda, EMR, Step Functions, Redshift)
• Design and manage data lakes and data warehouses on S3, Redshift, and Athena
• Develop Python-based ETL/ELT frameworks and reusable transformation modules
• Integrate diverse data sources including RDBMS, APIs, SaaS, Kinesis/Kafka
• Lead data modeling, schema design, and partitioning strategies for performance and cost efficiency
• Implement data quality, observability, and lineage using AWS Glue Data Catalog/Data Quality or equivalent tools
• Enforce strong data security, governance, IAM, encryption, and compliance practices
• Collaborate with Data Science, Analytics, DevOps, and Product teams to support ML/BI workloads
• Build CI/CD pipelines using CodePipeline, GitHub Actions, or similar
• Provide technical leadership, mentoring, and conduct code reviews
• Monitor and troubleshoot data infrastructure, ensuring high performance and reliability
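To give a flavor of the "reusable transformation modules" mentioned above, here is a minimal, hedged sketch of what one such Python/pandas module might look like. The table name and columns (`order_id`, `order_ts`) are hypothetical, purely for illustration:

```python
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Reusable transform: normalise column names, de-duplicate, parse timestamps."""
    out = df.copy()
    # Normalise headers so downstream SQL/Athena queries are case-consistent
    out.columns = [c.strip().lower() for c in out.columns]
    # Keep the first occurrence of each order
    out = out.drop_duplicates(subset=["order_id"])
    # Parse timestamps as timezone-aware UTC
    out["order_ts"] = pd.to_datetime(out["order_ts"], utc=True)
    return out

raw = pd.DataFrame({
    "Order_ID": [1, 1, 2],
    "Order_TS": ["2024-01-01T00:00:00Z",
                 "2024-01-01T00:00:00Z",
                 "2024-01-02T12:30:00Z"],
})
clean = clean_orders(raw)
print(len(clean))  # → 2 (duplicate order dropped)
```

In a real pipeline such a module would typically be packaged and imported by Glue or EMR jobs rather than run standalone.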
Must-Have Skills:
✔ 5–10 years of hands-on experience in Data Engineering
✔ Expert-level Python (pandas, PySpark, boto3, SQLAlchemy)
✔ Deep experience with AWS Data Services:
• Glue, Lambda, EMR, Step Functions
• Redshift, DynamoDB, Athena, S3, Kinesis
• IAM, CloudWatch, CloudFormation/Terraform
✔ Strong SQL, data modeling & performance tuning expertise
✔ Proven experience building data lakes, warehouses, ETL/ELT pipelines
✔ Experience with Git, CI/CD, and DevOps concepts
✔ Strong understanding of data governance, quality, lineage, and security
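The "partitioning strategies for performance and cost efficiency" called out above usually mean Hive-style S3 key layouts, which let Athena and Redshift Spectrum prune the data they scan. A minimal sketch (bucket name and prefix are hypothetical):

```python
from datetime import date

def partition_key(d: date, prefix: str = "s3://my-bucket/events") -> str:
    # Hive-style year=/month=/day= partitions let Athena skip irrelevant objects,
    # cutting both query latency and per-TB scan cost.
    return f"{prefix}/year={d.year}/month={d.month:02d}/day={d.day:02d}/"

print(partition_key(date(2024, 3, 7)))
# → s3://my-bucket/events/year=2024/month=03/day=07/
```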
Good-to-Have Skills:
• Apache Spark / PySpark on EMR or Glue
• Workflow orchestration tools (Airflow, dbt, Dagster)
• Real-time streaming: Kafka, Kinesis Data Streams/Firehose
• AWS Lake Formation, Glue Studio, DataBrew
• Exposure to ML/Analytics platforms (SageMaker, QuickSight)
• AWS Analytics or Solutions Architect Certification
People Prime Worldwide