
Senior AWS Data Engineer

5 - 10 years

10 - 20 Lacs

Posted: 1 week ago | Platform: Naukri


Work Mode

Hybrid

Job Type

Full Time

Job Description

Job Summary:

We are looking for a highly skilled Senior AWS Data Engineer to design, develop, and lead enterprise-grade data solutions on the AWS cloud. This position requires a blend of deep AWS technical proficiency, hands-on PySpark experience, and the ability to engage business stakeholders in solution design. The ideal candidate will build scalable, secure, and high-performance data platforms using AWS-native tools and best practices.

Role & Responsibilities:

  • Design and implement scalable AWS cloud-native data architectures, including data lakes, warehouses, and streaming pipelines
  • Develop ETL/ELT pipelines using AWS Glue (PySpark/Scala), Lambda, and Step Functions
  • Optimize Redshift-based data warehouses, including schema design, data distribution, and materialized views
  • Leverage Athena, the Glue Data Catalog, and S3 for efficient serverless query patterns
  • Implement IAM-based data access control, lineage tracking, and encryption for secure data workflows
  • Automate infrastructure and data deployments using CDK, Terraform, or CloudFormation
  • Drive data modelling standards (Star/Snowflake, 3NF, Data Vault) and ensure data quality and governance
  • Collaborate with data scientists, DevOps, and business stakeholders to deliver end-to-end data solutions
  • Mentor junior engineers and lead code reviews and architecture discussions
  • Participate in client-facing activities, including requirements gathering, technical proposal preparation, and solution demos

Must-Have Qualifications:

  • AWS expertise: proven hands-on experience with AWS Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch
  • PySpark & big data: minimum 2 years of hands-on PySpark/Spark experience for large-scale data processing
  • ETL/ELT engineering: expertise in Python, dbt, or similar automation frameworks
  • Data modelling: proficiency in designing and implementing normalized and dimensional models
  • Performance optimization: ability to tune Spark jobs with custom partitioning, broadcast joins, and memory management
  • CI/CD & automation: experience with GitHub Actions, CodePipeline, or similar tools
  • Consulting & pre-sales: prior exposure to client-facing roles, including proposal drafting and cost estimation

Good-to-Have Skills:

  • Knowledge of Iceberg, Hudi, or Delta Lake table formats
  • Experience with Athena Federated Query and Amazon OpenSearch Service
  • Familiarity with DataZone, DataBrew, and data profiling tools
  • Understanding of compliance frameworks such as GDPR, HIPAA, and SOC 2
  • BI integration skills using Power BI, QuickSight, or Tableau
  • Knowledge of event-driven architectures (e.g., Kinesis, MSK, Lambda)
  • Exposure to lakehouse or data mesh architectures
  • Experience with Lucidchart, Miro, or other documentation/storyboarding tools

Why Join Us?

  • Work on cutting-edge AWS data platforms
  • Collaborate with a high-performing team of engineers and architects
  • Opportunity to lead key client engagements and shape large-scale solutions
  • Flexible work environment and strong learning culture
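For candidates preparing for the data modelling portion of this role, the star-schema pattern named above can be sketched in a few lines. This is a minimal illustration using SQLite so it runs anywhere; the role itself targets Redshift, where the same pattern applies with distribution and sort keys added. All table and column names here are hypothetical, not from any specific engagement:

```python
import sqlite3

# In-memory database standing in for a warehouse such as Redshift.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Star schema: one central fact table referencing surrounding dimension tables.
cur.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, iso_date TEXT, year INTEGER);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    amount      REAL
);
""")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Acme", "APAC"), (2, "Globex", "EMEA")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(10, "2024-01-15", 2024), (11, "2024-02-20", 2024)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(100, 1, 10, 250.0), (101, 1, 11, 100.0), (102, 2, 10, 75.0)])

# Typical dimensional query: aggregate the fact table by dimension attributes.
rows = cur.execute("""
    SELECT c.region, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_id = c.customer_id
    JOIN dim_date d     ON f.date_id = d.date_id
    GROUP BY c.region, d.year
    ORDER BY c.region
""").fetchall()
print(rows)  # [('APAC', 2024, 350.0), ('EMEA', 2024, 75.0)]
```

In a Redshift deployment the same design would add a distribution key (often the fact table's highest-cardinality join column) and sort keys on common filter columns, which is the kind of tuning the responsibilities above describe.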


Pyramid It Consulting

Information Technology and Services

Innovation City

50 Employees

127 Jobs

Key People

  • John Doe, CEO
  • Jane Smith, CTO
