Posted: 6 days ago
Remote
Contractual
Job Title: Data Engineer - AWS Full Stack
Location: India (Remote or Hybrid)
Contract Type: Full-time, 1-Year Contract
Experience Required: Minimum 5 years
Start Date: Immediate
Compensation: Competitive (based on experience)

About the Role
We are seeking a highly skilled Data Engineer with deep expertise in the AWS ecosystem and full-stack data engineering. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and analytics platforms that support critical business insights and decision-making. This is a 1-year contract role ideal for professionals with experience across data ingestion, transformation, cloud infrastructure, and data operations.

Key Responsibilities
- Design and build end-to-end data pipelines using AWS services (Glue, Lambda, S3, Athena, Redshift, EMR, etc.).
- Develop and manage ETL/ELT processes, ensuring data quality, scalability, and maintainability.
- Collaborate with product, analytics, and engineering teams to deliver data models, APIs, and real-time data solutions.
- Implement best practices for data governance, lineage, monitoring, and access control.
- Automate data workflows using tools such as Airflow, Step Functions, or custom scripts.
- Create and maintain infrastructure as code (IaC) using CloudFormation or Terraform for AWS data components.
- Optimize data warehouse and lakehouse architectures for performance and cost.

Required Skills & Qualifications
- 5+ years of experience in data engineering, including cloud-native data development.
- Strong expertise in AWS data services: Glue, S3, Lambda, Redshift, Athena, Kinesis, EMR, etc.
- Proficiency in SQL, Python, and Spark for data manipulation and transformation.
- Experience with DevOps tools (CI/CD, Git, Docker) and infrastructure automation.
- Knowledge of data modeling, schema design, and performance tuning for large-scale datasets.
- Ability to work independently in a contract environment, managing priorities and deadlines.

Preferred Qualifications
- Familiarity with streaming data architectures using Kafka/Kinesis.
- Experience working in regulated or large-scale enterprise environments.
- Exposure to BI tools (e.g., QuickSight, Tableau, Power BI) and API integration for downstream consumption.
Company: RPI AI LAB