Work from Office
Full Time
If interested, please fill out the application form: https://forms.office.com/r/Zc8wDfEGEH

Responsibilities:
- Deliver projects integrating data flows within and across technology systems.
- Lead data modeling sessions with end-user groups, project stakeholders, and technology teams to produce logical and physical data models.
- Design end-to-end job flows that span systems, including quality checks and controls.
- Create technology delivery plans to implement system changes.
- Perform data analysis, data profiling, and data sourcing in relational and Big Data environments.
- Convert functional requirements into logical and physical data models.
- Assist in ETL development and testing, and troubleshoot ETL issues.
- Troubleshoot data issues and work with data providers for resolution; provide L3 support when needed.
- Design and develop ETL workflows using modern coding and testing standards.
- Participate in agile ceremonies and actively drive towards team goals.
- Collaborate with a global team of technologists.
- Lead with ideas and innovation.
- Manage communication and partner with end users to design solutions.

Required Skills (Must Have):
- Total experience of 4-10 years, with a minimum of 5 years of relevant experience.
- 5 years of project experience with Python/Shell scripting in data engineering, including building and optimizing data pipelines, architectures, and data sets with large data volumes.
- 3+ years of experience in PySpark scripting, including the Spark architecture framework.
- 3-5 years of strong experience in database development (Snowflake/SQL Server/Oracle/Sybase/DB2): schema design, complex procedures, complex data scripts, query authoring (SQL), and performance optimization.
- Strong understanding of Unix environments and batch scripting languages (Shell/Python).
- Strong knowledge of the Big Data/Hadoop platform.
- Strong engineering skills, with the ability to understand existing system designs and enhance or migrate them.
- Strong logical data modeling skills within the Financial Services domain.
- Experience in data integration and data conversions.
- Strong collaboration and communication skills.
- Strong organizational and planning skills.
- Strong analytical, profiling, and troubleshooting skills.

Good to Have:
- Experience with ETL tools (e.g., Informatica, Azure Data Factory) and pipelines across disparate sources.
- Experience working with Databricks.
- Familiarity with standard Agile and DevOps methodologies and tools (Jenkins, Sonar, Jira).
- Good understanding of developing ETL processes using Informatica or other ETL tools.
- Experience working with source code management solutions (e.g., Git).
- Knowledge of the investment management business.
- Experience with job scheduling tools (e.g., Autosys).
- Experience with data visualization software (e.g., Tableau).
- Experience with data modeling tools (e.g., PowerDesigner).
- Basic familiarity with metadata stores used to maintain a repository of Critical Data Elements (e.g., Collibra).
- Familiarity with XML or other markup languages.

Mandatory skill sets: ETL, Python/Shell scripting, building pipelines, PySpark, databases, SQL
Preferred skill sets: Informatica, Hadoop, Databricks, Collibra
PwC India