Posted: 2 months ago
Work from Office
Full Time
Job Category: Information Technology
Job Code: Data Engineer II
Job Title: Data Engineer II
Number of Positions: 1
Remaining Positions: 1
Location: McKinsey Gurugram office, hybrid mode (not remote)

Duties:
You will be a core member of McKinsey's analytics platform team, responsible for extracting large quantities of data from clients' IT systems, developing efficient ETL and data management processes, and building architectures for rapid ingestion and dissemination of key data.

Primary Responsibilities:
- Enhancements, new development, defect resolution, and production support of ETL pipelines built on AWS native services.
- This is a senior role; 9+ years of total experience is mandatory.
- Data modelling experience (working with varied data sets and their integrations).
- Integrate data sets using AWS services such as Glue and Lambda (see the Glue job sketch below).
- Use AWS SNS to send emails and alerts (see the SNS sketch below).
- Author ETL processes using Python and PySpark.
- Monitor ETL processes using CloudWatch events (see the EventBridge sketch below).
- Connect to data sources such as S3 and validate data using Athena (see the Athena sketch below).

Competencies / Experience:
- Deep technical skills in AWS Glue (Crawler, Data Catalog): 5 years
- Hands-on experience with Python: 3 years
- PL/SQL experience: 3 years
- CloudFormation and Terraform: 2 years
- CI/CD with GitHub Actions: 2 years
- Good understanding of AWS services such as S3, SNS, Secrets Manager, Athena, and Lambda: 1 year
- Familiarity with any of the following is highly desirable: Jira, GitHub, Python, Snowflake

Education: Bachelor's degree in a quantitative field such as Computer Science, Engineering, Statistics, or Mathematics is required; an advanced degree is a strong plus.

Languages: English (read, write, speak)
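For illustration, a minimal sketch of the kind of Glue PySpark ETL job the responsibilities describe, assuming a hypothetical Data Catalog database sales_db, table raw_orders, and output bucket s3://example-curated/orders/:

```python
# Sketch of a Glue PySpark ETL job; database, table, and bucket names
# are hypothetical placeholders, not part of the posting.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the crawled source table from the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Light transformation: drop rows missing the business key.
orders_df = orders.toDF().dropna(subset=["order_id"])

# Write curated output back to S3 as partitioned Parquet.
orders_df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders/"
)

job.commit()
```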
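A minimal sketch of the SNS alerting duty, as a Lambda handler that publishes a failure notice; the topic ARN environment variable is a hypothetical name:

```python
# Sketch of an alerting Lambda; ALERT_TOPIC_ARN is a hypothetical
# environment variable, and the event shape is assumed, not specified.
import json
import os

import boto3

sns = boto3.client("sns")


def lambda_handler(event, context):
    # Forward whatever the upstream ETL step reported.
    message = {
        "job": event.get("job_name", "unknown"),
        "state": event.get("state", "FAILED"),
        "detail": event.get("detail", ""),
    }
    sns.publish(
        TopicArn=os.environ["ALERT_TOPIC_ARN"],
        Subject=f"ETL alert: {message['job']} {message['state']}",
        Message=json.dumps(message, indent=2),
    )
    return {"status": "alert sent"}
```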
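For the CloudWatch Events monitoring duty, a sketch of wiring a rule (via the EventBridge API, which backs CloudWatch Events) that routes Glue job failures to an alert topic; the rule name and topic ARN are hypothetical:

```python
# Sketch of a CloudWatch Events / EventBridge rule for Glue job
# failures; rule name and target ARN are hypothetical. The topic's
# access policy must allow events.amazonaws.com to publish (not shown).
import json

import boto3

events = boto3.client("events")

RULE_NAME = "glue-job-failure-alerts"  # hypothetical

events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],
        "detail": {"state": ["FAILED", "TIMEOUT"]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        "Id": "sns-alerts",
        "Arn": "arn:aws:sns:us-east-1:123456789012:etl-alerts",
    }],
)
```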
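Finally, a sketch of Athena-based validation: a row-count check run through the Athena API, with hypothetical database, table, and results-bucket names:

```python
# Sketch of an Athena validation query; all names are hypothetical.
import time

import boto3

athena = boto3.client("athena")


def count_rows(database: str, table: str, output_s3: str) -> int:
    start = athena.start_query_execution(
        QueryString=f"SELECT COUNT(*) AS n FROM {table}",
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    qid = start["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    # Row 0 is the header; row 1 holds the count.
    return int(rows[1]["Data"][0]["VarCharValue"])


# Example: fail fast if the curated table is unexpectedly empty.
# assert count_rows("sales_db", "orders", "s3://example-athena-results/") > 0
```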