5 - 10 years
10 - 20 Lacs
Posted: 1 week ago
Hybrid
Full Time
Job Summary:
We are looking for a highly skilled Senior AWS Data Engineer to design, develop, and lead enterprise-grade data solutions on the AWS cloud. This position requires a blend of deep AWS technical proficiency, hands-on PySpark experience, and the ability to engage with business stakeholders in solution design. The ideal candidate will build scalable, secure, and high-performance data platforms using AWS-native tools and best practices.

Role & responsibilities:
- Design and implement scalable AWS cloud-native data architectures, including data lakes, warehouses, and streaming pipelines
- Develop ETL/ELT pipelines using AWS Glue (PySpark/Scala), Lambda, and Step Functions
- Optimize Redshift-based data warehouses, including schema design, data distribution, and materialized views
- Leverage Athena, the Glue Data Catalog, and S3 for efficient serverless query patterns
- Implement IAM-based data access control, lineage tracking, and encryption for secure data workflows
- Automate infrastructure and data deployments using CDK, Terraform, or CloudFormation
- Drive data modelling standards (Star/Snowflake, 3NF, Data Vault) and ensure data quality and governance
- Collaborate with data scientists, DevOps, and business stakeholders to deliver end-to-end data solutions
- Mentor junior engineers and lead code reviews and architecture discussions
- Participate in client-facing activities, including requirements gathering, technical proposal preparation, and solution demos

Must-Have Qualifications:
- AWS Expertise: Proven hands-on experience with AWS Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch
- PySpark & Big Data: Minimum 2 years of hands-on PySpark/Spark experience for large-scale data processing
- ETL/ELT Engineering: Expertise in Python, dbt, or similar automation frameworks
- Data Modelling: Proficiency in designing and implementing normalized and dimensional models
- Performance Optimization: Ability to tune Spark jobs with custom partitioning, broadcast joins, and memory management
- CI/CD & Automation: Experience with GitHub Actions, CodePipeline, or similar tools
- Consulting & Pre-sales: Prior exposure to client-facing roles, including proposal drafting and cost estimation

Good-to-Have Skills:
- Knowledge of Iceberg, Hudi, or Delta Lake table formats
- Experience with Athena Federated Query and AWS OpenSearch
- Familiarity with DataZone, DataBrew, and data profiling tools
- Understanding of compliance frameworks such as GDPR, HIPAA, and SOC 2
- BI integration skills using Power BI, QuickSight, or Tableau
- Knowledge of event-driven architectures (e.g., Kinesis, MSK, Lambda)
- Exposure to lakehouse or data mesh architectures
- Experience with Lucidchart, Miro, or other documentation/storyboarding tools

Why Join Us?
- Work on cutting-edge AWS data platforms
- Collaborate with a high-performing team of engineers and architects
- Opportunity to lead key client engagements and shape large-scale solutions
- Flexible work environment and strong learning culture
Pyramid IT Consulting
Information Technology and Services
50 Employees
127 Jobs