Posted: 5 hours ago
Work from Office | Full Time
1. Expertise in PySpark and Python.
2. Strong proficiency in AWS services: Glue, Redshift, S3, Lambda, EMR, Kinesis.
3. Hands-on experience with ETL tools and data pipeline orchestration.
4. Proficiency in Python or Scala for data processing.
5. Knowledge of SQL and NoSQL databases.
6. Familiarity with data modeling and data warehousing concepts.
7. Experience with CI/CD pipelines.
8. Understanding of security best practices for data in AWS.
9. Good hands-on experience with Python, NumPy, and pandas.
10. Experience in building ETL/data warehouse transformation processes.
11. Experience working with structured and unstructured data.
12. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark, SparkSQL, and related frameworks/libraries.
13. Developing scalable, reusable, self-service frameworks for data ingestion and processing.
14. Integrating end-to-end data pipelines that take data from source to target data repositories, ensuring the quality and consistency of the data.
15. Knowledge of big data frameworks (Spark, Hadoop).
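To make the ETL/data-warehouse transformation work described above concrete, here is a minimal sketch of an extract-transform-load step. It uses only the Python standard library (sqlite3) rather than PySpark or AWS Glue, and every table, column, and function name is illustrative, not part of the role's actual stack.

```python
# Illustrative ETL sketch: extract raw records, transform (clean and
# normalize), and load into a target table. Names are invented examples.
import sqlite3

def run_etl(records):
    # Extract: records arrive as raw (name, amount) tuples.
    # Transform: normalize names, coerce amounts, drop malformed rows.
    cleaned = [
        (name.strip().title(), float(amount))
        for name, amount in records
        if name and amount is not None
    ]
    # Load: write the cleaned rows into a target table and verify totals.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
    conn.close()
    return cleaned, total

rows, total = run_etl([(" alice ", "10.5"), ("BOB", 4.5), ("", 1)])
```

In a production pipeline of the kind this role covers, the same extract/transform/load shape would typically be expressed as PySpark DataFrame operations scheduled by an orchestrator, with S3 or Redshift as the target repository.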
Tata Consultancy Services
Location: Bengaluru
Salary: 5.0 - 15.0 Lacs P.A.