Posted: 2 weeks ago
On-site
Full Time
Responsibilities
- Design and develop ETL pipelines using Databricks Workflows and Delta Live Tables for data ingestion and transformation.
- Collaborate with Azure stack modules to handle large volumes of data.
- Write SQL, Python, and PySpark code to meet data processing and transformation needs.
- Understand business requirements and create workflows that meet them.
- Apply knowledge of mapping documents and transformation business rules.
- Ensure continuous communication with the team and stakeholders regarding project status.

Qualifications
- 4-7 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
- Extensive hands-on experience with Azure services: Databricks, Data Factory, Delta Live Tables, and Azure SQL.
- Experience in SQL, Python, and PySpark for data transformation and processing.
- Strong understanding of DevOps, CI/CD deployments, and Agile methodologies.
- Strong communication skills and attention to detail.
- Experience in the insurance or financial industry is preferred.

Required Skills
- Azure Databricks
- PySpark
- Advanced SQL
- Delta Live Tables
ValueMomentum
Hyderabad, Telangana, India
Salary: Not disclosed