Posted: 2 months ago
Platform: On-site
Full Time
Description:
Data Engineer (AWS + PySpark)
Job description as provided below:
• 8+ years of overall IT experience, including hands-on experience with Big Data technologies.
• Mandatory: hands-on experience with Python and PySpark.
• Build PySpark applications using Spark DataFrames in Python.
• Experience optimizing Spark jobs that process huge volumes of data.
• Hands-on experience with version control tools such as Git.
• Experience with Amazon analytics services such as Amazon EMR, Amazon Athena, and AWS Glue.
• Experience with Amazon compute services such as AWS Lambda and Amazon EC2, storage services such as Amazon S3, and other services such as Amazon SNS.
• Good to have: knowledge of data warehousing concepts, including dimensions, facts, and schemas (snowflake, star, etc.).
• Experience with columnar storage formats (Parquet, Avro, ORC, etc.); well versed in compression techniques such as Snappy and Gzip.
• Good to have: knowledge of at least one AWS database: Aurora, RDS, Redshift, ElastiCache, or DynamoDB.
VCM LOGISTICS PRIVATE LIMITED