Data Engineer (PySpark, PostgreSQL)

10 - 15 years

25 - 40 Lacs

Gurugram, Bengaluru

Posted: 1 week ago | Platform: Naukri


Skills Required

PySpark, Terraform, PostgreSQL, AWS, Python

Work Mode

Hybrid

Job Type

Full Time

Job Description

What you'll do:

You will play a critical role in designing, building, and maintaining high-quality data pipelines and infrastructure in adherence to data engineering standards and best practices. You will actively collaborate with team members, product teams across different portfolios, and stakeholders alike. You will be an active learner: tinkering with new products and services, using unfamiliar technologies, and encouraging continuous research inside and outside the current focus fields. You will provide leadership in the development and strategic direction of new products, processes, and technologies.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python
  • Implement and manage ETL/ELT processes to ensure seamless data integration and transformation
  • Ensure information security and compliance with data governance standards
  • Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems
  • Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team

Primary Skills:

  • Enhancements, new development, defect resolution, and production support of ETL development using AWS-native services
  • Integration of data sets using AWS services such as Glue and Lambda functions
  • Use of AWS SNS to send emails and alerts
  • Authoring ETL processes using Python and PySpark
  • ETL process monitoring using CloudWatch events
  • Connecting to different data sources, such as S3, and validating data using Athena
  • Experience in CI/CD using GitHub Actions
  • Proficiency in Agile methodology
  • Extensive working experience with advanced SQL and a deep understanding of complex SQL

(Illustrative sketches of the PySpark, SNS, and Athena work described above follow this description.)

Secondary Skills:

  • Experience working with Snowflake and an understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies

Competencies / Experience:

  • Deep technical skills in AWS Glue (Crawler, Data Catalog): 10 years (must have)
  • Hands-on experience with Python and PySpark: 5+ years (must have)
  • PL/SQL experience: 5+ years
  • CloudFormation: 5+ years
  • Terraform: 5+ years
  • CI/CD with GitHub Actions: 5 years
  • Experience with BI systems (Power BI, Tableau): 1 year (good to have)
  • Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 5 years
  • Additionally, familiarity with any of the following is highly desirable: Jira, GitHub, Snowflake

Why Join Us:

  • Be part of a dynamic and innovative team driving data-driven strategies in the TCRE product group
  • Work with cutting-edge technologies and tools in cloud computing and data engineering
  • Opportunity to solve complex data challenges and make a significant impact on the organization's risk-handling capabilities

Ideal Candidates:

Ideal candidates have 10+ years of experience, strong problem-solving skills, and familiarity with modern data engineering practices, and are eager to contribute to the success of our TCRE product group. If you meet the qualifications and are excited about this opportunity, we encourage you to apply!

Location: Gurgaon / Bangalore

Mandatory Skills: 10+ years of experience; PySpark, PL/SQL, CI/CD, GitHub; Terraform (good to have)

Interested candidates can share their resume at: kartikeya.verma@apsidatasolutions.com or nupur.gupta@apsidatasolutions.com
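As a hedged illustration of the PySpark ETL authoring this posting calls for, below is a minimal sketch, not the employer's actual pipeline. The S3 paths and the order_id, amount, and order_date columns are hypothetical, and on AWS Glue the same logic would usually live inside a Glue job script rather than a standalone program.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical S3 locations, for illustration only.
    SOURCE_PATH = "s3://example-raw-bucket/orders/"
    TARGET_PATH = "s3://example-curated-bucket/orders_clean/"

    spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

    # Extract: read raw CSV files landed in S3.
    raw = spark.read.option("header", "true").csv(SOURCE_PATH)

    # Transform: drop rows missing the key, cast types, stamp the load time.
    clean = (
        raw.dropna(subset=["order_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("load_ts", F.current_timestamp())
    )

    # Load: write partitioned Parquet for downstream consumers such as Athena.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(TARGET_PATH)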
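The SNS alerting duty could look like the following boto3 sketch; the topic ARN is a placeholder, and a real job would load it from configuration or Secrets Manager.

    import boto3

    # Placeholder topic ARN; a real job would read this from configuration.
    TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:etl-alerts"

    def notify_failure(job_name: str, error: str) -> None:
        """Publish a failure alert to an SNS topic (e.g., with email subscribers)."""
        sns = boto3.client("sns")
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"ETL job failed: {job_name}",
            Message=f"Job {job_name} failed with error:\n{error}",
        )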
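And validating landed data with Athena might look like this sketch; the query, table name, and results bucket are assumptions, and the simple polling loop omits the timeout handling production code would need.

    import time

    import boto3

    athena = boto3.client("athena")

    # Hypothetical validation query and results location.
    QUERY = "SELECT COUNT(*) AS row_count FROM curated.orders_clean"
    RESULTS = "s3://example-athena-results/"

    def run_validation(sql: str) -> list:
        """Run a validation query in Athena and return its result rows."""
        qid = athena.start_query_execution(
            QueryString=sql,
            ResultConfiguration={"OutputLocation": RESULTS},
        )["QueryExecutionId"]
        while True:  # simplified polling; add a timeout in real code
            state = athena.get_query_execution(QueryExecutionId=qid)[
                "QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)
        if state != "SUCCEEDED":
            raise RuntimeError(f"Validation query ended in state {state}")
        return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

    rows = run_validation(QUERY)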

Apsidata Solutions

Market Research

Noida, Uttar Pradesh

51-200 Employees

5 Jobs

