Posted: 17 hours ago
Work from Office
Full Time
1. Expertise in PySpark and Python.
2. Strong proficiency in AWS services: Glue, Redshift, S3, Lambda, EMR, Kinesis.
3. Hands-on experience with ETL tools and data pipeline orchestration.
4. Proficiency in Python or Scala for data processing.
5. Knowledge of SQL and NoSQL databases.
6. Familiarity with data modeling and data warehousing concepts.
7. Experience with CI/CD pipelines.
8. Understanding of security best practices for data in AWS.
9. Strong hands-on experience with Python, NumPy, and pandas.
10. Experience building ETL/data warehouse transformation processes.
11. Experience working with structured and unstructured data.
12. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark, SparkSQL, and related frameworks/libraries (a brief sketch of this kind of work follows the list).
13. Developing scalable, reusable, self-service frameworks for data ingestion and processing.
14. Integrating end-to-end data pipelines that move data from source systems to target repositories while ensuring data quality and consistency.
15. Knowledge of big data frameworks (Spark, Hadoop).
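
For candidates gauging the PySpark/SparkSQL skills item 12 refers to, here is a minimal illustrative sketch of an S3-to-S3 transformation; the bucket paths, view name, and orders schema (region, order_ts, amount) are hypothetical placeholders, not part of this posting.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

# Ingest raw JSON events from a hypothetical S3 landing bucket.
orders = spark.read.json("s3://example-landing-bucket/orders/")
orders.createOrReplaceTempView("orders")

# Transform with SparkSQL: daily order counts and revenue per region.
daily = spark.sql("""
    SELECT region,
           to_date(order_ts) AS order_date,
           COUNT(*)          AS order_count,
           SUM(amount)       AS revenue
    FROM orders
    GROUP BY region, to_date(order_ts)
""")

# Write the curated result back to S3 as partitioned Parquet.
daily.write.mode("overwrite") \
     .partitionBy("order_date") \
     .parquet("s3://example-curated-bucket/orders_daily/")

spark.stop()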
Tata Consultancy Services