
4 AWS Data Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4 - 6 years

6 - 13 Lacs

Hyderabad

Work from Office


Description

Roles and Responsibilities:
- Design AWS architectures based on business requirements.
- Create architectural diagrams and documentation.
- Present cloud solutions to stakeholders.

Skills and Qualifications:
- Design, develop, and maintain scalable ETL/ELT pipelines using AWS services such as Glue, Lambda, and Step Functions (a minimal sketch follows this listing).
- Work with batch and real-time data processing using AWS Glue, Kinesis, Kafka, or Apache Spark.
- Optimize data pipelines for performance, scalability, and cost-effectiveness.
- Identify bottlenecks and optimize query performance on Redshift, Athena, and Glue.
- Strong knowledge of AWS services: EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, CloudWatch, etc.
- Experience with serverless architectures (AWS Lambda, API Gateway, Step Functions).
- Experience with AWS networking (VPC, Route 53, ELB, Security Groups, etc.).
- Experience with AWS CloudFormation for automating infrastructure.
- Proficiency in scripting languages such as Python or Bash.
- Experience with automation tools (AWS Systems Manager, AWS Lambda).
- Experience with containerization (Docker, Kubernetes, AWS ECS, EKS, Fargate).
- Experience with AWS CloudWatch, AWS X-Ray, the ELK Stack, or third-party monitoring tools.
- Experience with AWS database services (RDS, DynamoDB, Aurora, Redshift).
- Experience with storage solutions (S3, EBS, EFS, Glacier).
- Experience with AWS Direct Connect, Transit Gateway, and VPN solutions.
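Since the pipeline work above centers on Glue, here is a minimal sketch of a Glue PySpark ETL job of the kind the first skills bullet describes: read a table registered in the Glue Data Catalog, drop malformed rows, and write partitioned Parquet to S3. The database, table, and bucket names are hypothetical placeholders, not from the listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and set up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read source data registered in the Glue Data Catalog.
# "raw_db" and "orders_csv" are hypothetical placeholders.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",
    table_name="orders_csv",
)

# Drop rows missing the key column, then write partitioned Parquet to S3.
cleaned = source.toDF().dropna(subset=["order_id"])
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"  # hypothetical target bucket
)

job.commit()
```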

Posted 2 months ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Work from Office


Job Description: AWS Data Engineer – Hadoop Migration

We are seeking an experienced AWS Principal Data Architect to lead the migration of Hadoop DWH workloads from on-premise to AWS EMR. As an AWS Data Architect, you will be a recognized expert in cloud data engineering, building solutions for the data processing and warehousing requirements of large enterprises. You will design, implement, and optimize data architecture in AWS, delivering scalable, flexible, secure, and resilient cloud architectures that solve business problems and accelerate the adoption of our clients' data initiatives on the cloud.

Key Responsibilities:
- Lead the migration of Hadoop workloads from on-premise to the AWS EMR stack.
- Design and implement data architectures on AWS, including data pipelines, storage, and security.
- Collaborate with cross-functional teams to ensure seamless migration and integration.
- Optimize data architectures for scalability, performance, and cost-effectiveness.
- Develop and maintain technical documentation and standards.
- Provide technical leadership and mentorship to junior team members.
- Work closely with stakeholders to understand business requirements and ensure data architectures meet business needs.
- Work alongside customers to build enterprise data platforms using AWS data services such as Elastic MapReduce (EMR), Redshift, Kinesis, Data Exchange, DataSync, RDS, Data Store, Amazon MSK, DMS, Glue, AppFlow, AWS Zero-ETL, Glue Data Catalog, Athena, Lake Formation, S3, RMS, DataZone, Amazon MWAA, and Kong APIs.
- Deep understanding of Hadoop components, conceptual processes, and system functioning, and of the corresponding components in AWS EMR and other AWS services.
- Good experience with Spark on EMR (a minimal submission sketch follows this listing).
- Experience with Snowflake/Redshift.
- Good grasp of the AWS system engineering aspects of setting up CI/CD pipelines on AWS using CloudWatch, CloudTrail, KMS, IAM Identity Center, Secrets Manager, etc.
- Extract best-practice knowledge, reference architectures, and patterns from these engagements for sharing with the worldwide AWS solution architect community.

Basic Qualifications:
- 10+ years of IT experience, with 5+ years in Data Engineering and 5+ years of hands-on experience with AWS Data/EMR services (e.g. S3, Glue, Glue Catalog, Lake Formation).
- Strong understanding of Hadoop architecture, including HDFS, YARN, MapReduce, Hive, and HBase.
- Experience with data migration tools such as Glue and DataSync.
- Excellent knowledge of data modeling, data warehousing, ETL processes, and other data management systems.
- Strong understanding of security and compliance requirements in the cloud.
- Experience with Agile development methodologies and version control systems.
- Excellent communication and leadership skills.
- Ability to work effectively across internal and external organizations and virtual teams.
- Deep experience with AWS native data services, including Glue, Glue Catalog, EMR, Spark on EMR, DataSync, RDS, Data Exchange, Lake Formation, and Athena.
- AWS Certified Data Analytics – Specialty.
- AWS Certified Solutions Architect – Professional.
- Experience with containerization and serverless computing.
- Familiarity with DevOps practices and automation tools.
- Experience with a Snowflake/Redshift implementation is additionally preferred.

Preferred Qualifications:
- Technical degree in computer science, software engineering, or mathematics.
- Cloud and data engineering background with migration experience.

Other Skills:
- A critical thinker with strong research, analytics, and problem-solving skills.
- Self-motivated, with a positive attitude and the ability to work independently or in a team.
- Able to work under tight timelines and deliver on complex problems.
- Must be able to work flexible hours (including weekends and nights) as needed.
- A strong team player.
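Since Spark on EMR and the Hadoop-to-EMR migration are the core of this role, here is a minimal sketch of submitting a migrated Spark workload as a step on an already-running EMR cluster via boto3. The cluster ID, region, and script location are hypothetical placeholders.

```python
import boto3

emr = boto3.client("emr", region_name="ap-south-1")  # hypothetical region

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTER",  # hypothetical EMR cluster ID
    Steps=[
        {
            "Name": "migrate-hive-table",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                # command-runner.jar lets EMR run spark-submit directly.
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    # Hypothetical migrated job script staged on S3.
                    "s3://example-bucket/jobs/migrate_hive_table.py",
                ],
            },
        }
    ],
)
print("Submitted step:", response["StepIds"][0])
```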

Posted 3 months ago

Apply

8 - 12 years

22 - 32 Lacs

Pune, Bangalore Rural, Hyderabad

Work from Office


AWS Data / API Gateway Pipeline Engineer

Skills: AWS data warehousing; ETL pipelines; AWS services (Lambda, EventBridge, Step Functions, SNS, SQS, S3, Kinesis, Kafka); Python; Spark/PySpark; SQL queries; Postgres; Redis Cache; DynamoDB; CI/CD pipelines; GitHub; Agile.
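Several of these keywords (Lambda, S3, SQS) combine into the standard event-driven ingestion pattern. As a minimal sketch, assuming a hypothetical queue URL, here is a Lambda handler that forwards S3 object-created events to SQS for a downstream pipeline stage.

```python
import json

import boto3

sqs = boto3.client("sqs")
# Hypothetical queue URL; in practice this would come from configuration.
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/ingest-queue"


def handler(event, context):
    """Triggered by S3 object-created events; enqueue one message per object."""
    records = event.get("Records", [])
    for record in records:
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))
    return {"enqueued": len(records)}
```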

Posted 3 months ago

Apply

5 - 10 years

0 Lacs

Kolkata

Hybrid


Skills Required: AWS Data Engineer – DATA-SERVICES-AWS

Skills: AWS Glue; AWS Lambda; AWS RDS; AWS S3; DynamoDB; PySpark
Location: Bangalore
Experience: 5+ years only

If interested, please share your updated resume with meeta.padaya@ltimindtree.com along with the details below:
- Total experience
- Relevant experience in AWS Data
- AWS PaaS services
- PySpark
- Available for F2F interview on 26th Apr, Saturday (Yes/No)
- Company
- CCTC
- ECTC
- NP (if serving, kindly mention LWD)
- Current / preferred location
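For the PySpark-on-S3 portion of this skill set, here is a minimal sketch of a typical aggregation job; the bucket paths and column names are hypothetical placeholders, not from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-aggregation-sketch").getOrCreate()

# Hypothetical raw-events path; the schema is inferred from JSON.
events = spark.read.json("s3a://example-raw-bucket/events/")

# Count events per day and type; "event_timestamp" and "event_type"
# are hypothetical column names used for illustration.
daily_counts = (
    events.withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .count()
)

daily_counts.write.mode("overwrite").parquet(
    "s3a://example-curated-bucket/daily_counts/"  # hypothetical target
)
```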

Posted 1 month ago

Apply