Posted: 1 day ago
Work from Office | Full Time
1. Expertise in PySpark and Python.
2. Knowledge of and skill in Databricks.
3. Strong proficiency in Azure services: Data Factory, Synapse, Data Lake, Databricks.
4. Hands-on experience with ETL tools and data pipeline orchestration.
5. Proficiency in Python or Scala for data processing.
6. Knowledge of SQL and NoSQL databases.
7. Familiarity with data modeling and data warehousing concepts.
8. Understanding of security best practices for data in AWS.
9. Good hands-on experience with Python, NumPy, and pandas.
10. Experience building ETL/data warehouse transformation processes.
11. Experience working with structured and unstructured data.
12. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark, SparkSQL, and related frameworks/libraries.
13. Developing scalable, reusable, self-service frameworks for data ingestion and processing.
14. Integrating end-to-end data pipelines that move data from source systems to target repositories while ensuring data quality and consistency.
15. Knowledge of big data frameworks (Spark, Hadoop).
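The ingestion-and-transformation skills listed above can be illustrated with a minimal sketch. This uses pandas (item 9) rather than PySpark so it runs without a Spark cluster; the column names and sample records are hypothetical, chosen only to show a typical clean-then-aggregate ETL step:

```python
import pandas as pd

# Hypothetical raw source records; in a real pipeline these would be
# read from a data lake or landing zone rather than defined inline.
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "20.0", "20.0", None],
    "region": ["south", "north", "north", "south"],
})

# Quality step: drop duplicate rows and rows missing a required field.
clean = raw.drop_duplicates().dropna(subset=["amount"])

# Transform step: cast string amounts to floats, then aggregate per region.
clean = clean.assign(amount=clean["amount"].astype(float))
summary = clean.groupby("region", as_index=False)["amount"].sum()
```

In PySpark the same shape of pipeline would use `dropDuplicates`, `dropna`, a `cast`, and `groupBy(...).sum(...)` on a DataFrame; the pandas version above is only a stand-in for local experimentation.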
Tata Consultancy Services
Bengaluru | 15.0 - 30.0 Lacs P.A.