Join our digital revolution in NatWest Digital X

In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive.

This role is based in India, and as such all normal working days must be carried out in India.

Join us as a Software Engineer, PySpark
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It’s a chance to hone your existing technical skills and advance your career
- We're offering this role at associate vice president level
What you'll do
In your new role, you’ll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You’ll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be:
- Producing complex, critical software rapidly and to a high standard, adding value to the business
- Working in permanent teams that are responsible for the full life cycle, from initial development, through enhancement and maintenance, to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working code
- Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations
The skills you'll need
To take on this role, you’ll need a background in software engineering, software design and architecture, and an understanding of how your area of expertise supports our customers. You'll need at least eight years of experience in designing, developing and maintaining robust ETL pipelines using PySpark to process and transform data from various sources. You'll also need experience in AWS services such as Amazon S3, AWS Glue and Amazon Redshift to store, manage and analyse data efficiently, while monitoring and optimising data processing workflows for performance and scalability using best practices in PySpark and AWS (a brief, illustrative sketch of this kind of pipeline follows the list below). You’ll also need:
- Experience of working with development and testing tools, bug tracking tools and wikis
- Experience in multiple programming languages or low code toolsets
- Experience of DevOps, testing and Agile methodologies, and their associated toolsets
- A background in solving highly complex, analytical and numerical problems
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
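To give a flavour of the kind of work described above, here is a minimal sketch of a PySpark ETL pipeline: extract raw data from Amazon S3, transform it, and write curated output back to S3 for downstream use via AWS Glue or Amazon Redshift. This is an illustration only, not the team's actual codebase; the bucket names, paths and column names are hypothetical, and a real AWS Glue job would typically use a GlueContext rather than plain PySpark.

```python
# Illustrative PySpark ETL sketch. All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")
    .getOrCreate()
)

# Extract: read raw CSV data from an S3 bucket (hypothetical location).
raw = spark.read.csv(
    "s3://example-bucket/raw/transactions/",
    header=True,
    inferSchema=True,
)

# Transform: drop incomplete records and aggregate amounts per account per day.
daily_totals = (
    raw
    .filter(F.col("amount").isNotNull())
    .withColumn("txn_date", F.to_date("timestamp"))
    .groupBy("txn_date", "account_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write partitioned Parquet back to S3, ready to be catalogued by
# AWS Glue and queried from Amazon Redshift (e.g. via Redshift Spectrum).
daily_totals.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```

Writing partitioned, columnar output such as Parquet back to S3 is a common pattern in pipelines like this, since it keeps storage cheap while letting Glue and Redshift prune partitions for faster, more scalable queries.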