Posted: 1 month ago
Platform: On-site
Full Time
Strong hands-on experience in Python programming and PySpark.
Experience using AWS services (Redshift, Glue, EMR, S3 and Lambda).
Experience working with Apache Spark and the Hadoop ecosystem.
Experience in writing and optimizing SQL for data manipulation.
Good exposure to scheduling tools; Airflow is preferable.
Must have: data warehouse experience with AWS Redshift or Hive.
Experience in implementing security measures for data protection.
Expertise in building and testing complex data pipelines for ETL processes, both batch and near real time (a minimal sketch of such a pipeline follows this list).
Readable documentation of all components being developed.
Knowledge of Database technologies for OLTP and OLAP workloads.
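To illustrate the kind of work these requirements describe, here is a minimal sketch of a batch PySpark ETL job: read raw data from S3, apply a simple transformation, and write partitioned Parquet to a curated location that a downstream process could load into Redshift. All bucket names, paths, column names, and the job name are hypothetical placeholders and are not taken from the posting.

```python
# Illustrative only: a minimal batch ETL job of the kind described above.
# Bucket names, paths, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def run_daily_orders_etl(run_date: str) -> None:
    spark = (
        SparkSession.builder
        .appName("daily-orders-etl")  # hypothetical job name
        .getOrCreate()
    )

    # Extract: raw JSON landed in S3 by an upstream process (hypothetical path).
    # On EMR the s3:// scheme resolves via EMRFS; elsewhere s3a:// with
    # hadoop-aws may be needed.
    orders = spark.read.json(f"s3://example-raw-bucket/orders/dt={run_date}/")

    # Transform: basic cleansing plus a per-customer daily aggregate.
    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .groupBy("customer_id")
        .agg(
            F.count("*").alias("order_count"),
            F.sum("amount").alias("total_amount"),
        )
        .withColumn("run_date", F.lit(run_date))
    )

    # Load: write partitioned Parquet to a curated zone; a downstream COPY or
    # Glue job could then load it into Redshift.
    (
        daily_totals.write
        .mode("overwrite")
        .partitionBy("run_date")
        .parquet("s3://example-curated-bucket/daily_order_totals/")
    )

    spark.stop()


if __name__ == "__main__":
    run_daily_orders_etl("2024-01-01")
```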
Tata Consultancy Services