4 - 8 years
0 Lacs
Posted: 1 day ago
Platform: On-site
Full Time
Job Description:
We are looking for a skilled PySpark Developer with experience in Azure Databricks (ADB) and Azure Data Factory (ADF) to join our team. The ideal candidate will play a crucial role in designing, developing, and implementing data solutions using PySpark for large-scale data processing and analytics.
Responsibilities:
Design, develop, and deploy PySpark applications and workflows on Azure Databricks for data transformation, cleansing, and aggregation.
Implement data pipelines using Azure Data Factory (ADF) to orchestrate ETL/ELT processes across heterogeneous data sources.
Collaborate with Data Engineers and Data Scientists to integrate and process structured and unstructured data sets into actionable insights.
Optimize PySpark jobs and data pipelines for performance, scalability, and reliability.
Conduct regular financial risk assessments to identify potential vulnerabilities in data processing workflows.
Ensure data quality and integrity throughout all stages of data processing.
Develop and implement strategies to mitigate financial risks associated with data transformation and aggregation.
Troubleshoot and debug issues related to data pipelines and processing.
Ensure compliance with regulatory requirements and industry standards in all data processing activities.
Implement best practices for data security, compliance, and privacy within the Azure environment.
Document technical specifications, data flows, and solution architecture.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
Proven experience as a PySpark Developer or similar role with a strong understanding of Apache Spark internals.
Hands-on experience with Azure Databricks (ADB) and Azure Data Factory (ADF).
Proficiency in the Python programming language and a solid understanding of SQL.
Experience designing and optimizing data pipelines for ETL/ELT processes.
Familiarity with cloud platforms, preferably Microsoft Azure.
Excellent problem-solving skills and ability to think critically.
Strong communication skills with the ability to collaborate effectively in a team environment.
Experience in Financial, Risk, Compliance, or Banking domains is a plus.
Experience identifying and mitigating financial risks in data processes.
Ability to analyse data for potential risk factors and develop strategies to minimize financial risk.
Ability to ensure all data processes comply with relevant regulatory requirements and industry standards.
Preferred Qualifications:
Certification in Azure Data Engineering or related field.
Knowledge of other big data technologies such as Hadoop, Hive, or Kafka.
Familiarity with machine learning frameworks and techniques.
CAPGEMINI TECHNOLOGY SERVICES INDIA LIMITED