Posted: 3 hours ago
Platform: Hybrid
Full Time
Required Skill Set

Must-Have:
- Strong experience with AWS Cloud Services (S3, Lambda, Glue, EMR, Redshift, etc.)
- Proficiency in Python and PySpark for data engineering tasks
- Strong SQL skills and a solid understanding of data warehousing concepts
- Experience in designing, developing, and maintaining ETL pipelines

Good-to-Have:
- Working knowledge of Databricks or Apache Spark
- Familiarity with Terraform and/or AWS CloudFormation
- Excellent analytical, problem-solving, and communication skills

Job Responsibilities

Primary:
- Design, develop, and manage scalable ETL pipelines using AWS services (see the sketch after these lists)
- Handle structured and semi-structured data using Python and PySpark
- Develop complex SQL queries for data transformation and reporting
- Collaborate effectively with data architects, analysts, and stakeholders
- Ensure best practices around cost, security, and cloud resource documentation

Secondary:
- Exposure to DevOps and CI/CD practices
- Experience with Databricks or other big data platforms
- Working knowledge of AWS monitoring and logging services (e.g., CloudWatch)
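To give candidates a feel for the primary responsibilities, here is a minimal PySpark sketch of the kind of pipeline described: it reads semi-structured JSON from S3, expresses the transformation in SQL, and writes partitioned Parquet back to S3. The bucket paths and column names (event_type, event_time) are illustrative assumptions, not details from this posting, and the s3:// paths assume an environment such as EMR or Glue where S3 access is already configured.

from pyspark.sql import SparkSession

# Hypothetical bucket paths for illustration only.
SOURCE_PATH = "s3://example-raw-bucket/events/"
TARGET_PATH = "s3://example-curated-bucket/daily_event_counts/"

spark = SparkSession.builder.appName("daily-event-etl").getOrCreate()

# Read semi-structured JSON events; Spark infers the schema.
events = spark.read.json(SOURCE_PATH)

# Register a temp view so the transformation can be written in SQL.
events.createOrReplaceTempView("events")

# Assumed columns: event_type (string), event_time (timestamp).
daily_counts = spark.sql("""
    SELECT event_type,
           CAST(event_time AS DATE) AS event_date,
           COUNT(*)                 AS event_count
    FROM events
    GROUP BY event_type, CAST(event_time AS DATE)
""")

# Write partitioned Parquet for downstream warehousing and reporting.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(TARGET_PATH)

spark.stop()

In practice a job like this would be scheduled and parameterized (for example as a Glue job or EMR step), with the underlying infrastructure managed through Terraform or CloudFormation as noted under Good-to-Have.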
Consulting Krew
Location: Pune, Bengaluru
Salary: 6.0 - 12.0 Lacs P.A.