Job Description
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: Python (Programming Language), Amazon Web Services (AWS), PySpark
Minimum 5 years of experience required
Educational Qualification: 15 years of full-time education
Summary: As a Data Engineer, you will play a pivotal role in integrating and maintaining the infrastructure needed to support data processing and analysis. You will collaborate with data scientists, analysts, and other stakeholders to ensure data availability, integrity, and usability for business intelligence and analytics purposes.
Roles & Responsibilities:
- Own and enhance automation pipelines using Python.
- Work with AWS services including Lambda, Glue, API Gateway, DynamoDB, S3, Transfer Family, KMS, and IAM.
- Use SQL for data processing and integration tasks.
- Collaborate on data workflows; Snowflake and dbt experience is a plus.
- Manage code and CI/CD pipelines with GitLab.
- Cross-train into Terraform for infrastructure automation.
Professional & Technical Skills:
- Strong Python programming experience.
- Hands-on expertise with AWS serverless and data services: Lambda, Glue, API Gateway, DynamoDB, S3, Transfer Family, KMS, and IAM.
- SQL proficiency.
- GitLab experience.
- Ability to work independently as an individual contributor.
Nice to Have: Snowflake, dbt, or Terraform experience, or a willingness to learn them.
Additional Information: The candidate should have a minimum of 5 years of experience in Data Engineering. This position is based at our Bengaluru office. 15 years of full-time education is required.