Specialism
Data, Analytics & AI
Summary
In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Responsibilities
As a Software Development Engineer 3 at Tesco, you hold a senior individual-contributor role, demonstrating active technical leadership with proven impact across teams and the wider directorate. You take ownership of and accountability for product development within your domain, contributing to organisational capability through coaching, mentoring, and involvement in hiring. In this position, you drive the development of data engineering solutions, balancing functional and non-functional requirements. Your responsibilities include planning and leading data engineering activities for strategic, large, and complex programmes; monitoring the application of data standards and architectures; and contributing to organisational policies and guidelines for data engineering.

As a technical leader, you influence technology choices, provide perspective across teams, and participate actively in critical product design and architecture efforts. You deliver impactful projects, demonstrate technical expertise, and contribute to the maturity of development processes. Comfortable managing priorities and navigating ambiguity, you apply data-driven decision-making and communicate effectively with your team and stakeholders. Working within a multidisciplinary agile team, you are hands-on with product implementation across the entire stack, including infrastructure and platform-related efforts. Your role involves developing scalable data applications, ETL, Data Lake implementations, and analytics processing pipelines. You also lead a team of Data Engineers in a technical capacity, providing guidance and mentorship to junior and mid-level team members.
What you'll do
- Big data development experience for a large enterprise.
- Experience building streaming pipelines using Spark Streaming, Kafka/Event Hub, or a similar technology stack (see the sketch after this list).
- Experience developing, implementing, and tuning distributed data processing pipelines that handle large volumes of data.
- Strong experience in architecture and design for building enterprise-scale products, with a focus on scalability, low latency, and fault tolerance.
- Expertise in writing complex, highly optimised SQL queries and data pipelines across large data sets.
- Ability to work effectively in a team setting as well as individually.
- Ability to communicate and collaborate with external teams and stakeholders.
- A growth mindset, with the willingness to learn and contribute across different modules or services.
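For illustration only (not part of the role description): a minimal sketch of the kind of Kafka-to-lake streaming pipeline mentioned above, using Spark Structured Streaming in Scala. The broker address, topic name, and lake paths are placeholder assumptions, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    // Requires the spark-sql-kafka connector on the classpath at runtime.
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()

    // Subscribe to a Kafka topic; broker and topic names are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .load()

    // Kafka delivers key/value as binary; cast the payload for downstream parsing.
    val events = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Write micro-batches as Parquet; the checkpoint provides restart/fault tolerance.
    events.writeStream
      .format("parquet")
      .option("path", "/lake/bronze/clickstream")
      .option("checkpointLocation", "/lake/_checkpoints/clickstream")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()
      .awaitTermination()
  }
}
```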
What you'll bring
- Working knowledge of big data engineering: ingestion and integration with third-party and in-house data sources; metadata management, data quality, and master data management solutions; structured and unstructured data.
- Excellent written and oral communication skills.
- Self-starter with quick learning abilities.
- Ability to multitask and work under stringent deadlines.
- Ability to understand and work on various internal systems.
- Ability to work with multiple stakeholders.
- Experience leading a multi-skilled agile team and delivering high-visibility, high-impact projects.
Mandatory skill sets
Spark, Scala or Java (Scala preferred), Cloud Native, SQL, and data structures
Preferred skill sets
- Experience in Hadoop, Spark, and distributed computing frameworks.
- Professional hands-on experience in Scala.
- Professional hands-on experience in SQL and query optimisation.
- Strong computer science fundamentals: data structures and algorithms.
- Experience with programming design patterns.
- Experience in system design.
- Experience with CI/CD tools such as Git, Docker, and Jenkins.
- Hands-on experience in data processing and data manipulation, including data warehousing concepts and SCD types (a sketch follows this list).
- At least one cloud exposure (Azure preferred).
- Exposure to streaming data use cases via Kafka or Structured Streaming.
- Experience with NoSQL, messaging queues, and orchestration frameworks (Airflow/Oozie).
- Exposure to multi-hop architecture is an added advantage.
- Working knowledge of Ceph, Docker, Kubernetes, and Kafka Connect is an added advantage.
- Working experience with Data Lakes, Medallion Architecture, Parquet, and Apache Iceberg is desirable.
- Experience with data governance tools such as Alation or Collibra, and with data quality, data lineage, and metadata, is preferred.
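For illustration only: one common reading of "SCD types" is a Slowly Changing Dimension Type 2 merge. Below is a minimal Scala sketch using Spark SQL's MERGE INTO, which needs a MERGE-capable table format such as Apache Iceberg. The table and column names are assumptions, and a complete SCD2 flow would also insert a replacement current row for changed keys (typically a second pass or a staged union), omitted here for brevity.

```scala
import org.apache.spark.sql.SparkSession

object ScdType2Expire {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("scd2-merge").getOrCreate()

    // Expire the current dimension row when the staged record differs, and insert
    // brand-new keys as current rows. dim_customer / stg_customer and their
    // columns are illustrative placeholders, not names from this posting.
    spark.sql("""
      MERGE INTO dim_customer AS tgt
      USING stg_customer AS src
      ON tgt.customer_id = src.customer_id AND tgt.is_current = true
      WHEN MATCHED AND tgt.address <> src.address THEN
        UPDATE SET is_current = false, end_date = current_date()
      WHEN NOT MATCHED THEN
        INSERT (customer_id, address, start_date, end_date, is_current)
        VALUES (src.customer_id, src.address, current_date(), NULL, true)
    """)
  }
}
```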
Years of experience required
8 to 11 years.
Education qualification
B.E., B.Tech., MCA, M.Tech., M.E.
Bachelor's degree in computer science, information technology, engineering, or information systems, and 8+ years of experience in software engineering or a related area at a technology, retail, or data-driven company.
Education
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred
Required Skills
Structured Query Language (SQL)
Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 21 more}
Travel Requirements
Available for Work Visa Sponsorship