Work from Office
Full Time
About the role
As a Big Data Engineer, you will make an impact by identifying and closing consulting services opportunities with major UK banks. You will be a valued member of the BFSI team, working collaboratively with your manager, primary team, and other stakeholders in the unit.
In this role, you will:
Collaborate with cross-functional teams to improve data ingestion, transformation, and validation workflows
Work closely with Data Engineers, Architects, and Analysts to understand data reconciliation requirements
Develop and implement PySpark programs to process large datasets on big data platforms
Analyze and comprehend existing data ingestion and reconciliation frameworks
Perform complex transformations, including reconciliation and advanced data manipulations (a minimal sketch follows this list)
Fine-tune Spark jobs for performance optimization, ensuring efficient data processing at scale
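To make the reconciliation duties above concrete, here is a minimal PySpark sketch; the txn_id/amount schema, the toy values, and the job name are hypothetical illustrations, not taken from any actual Cognizant framework.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reconciliation-sketch").getOrCreate()

# Toy source and target datasets; a real job would read these from storage.
source = spark.createDataFrame(
    [("T1", 100.0), ("T2", 250.0), ("T3", 75.0)], ["txn_id", "amount"]
)
target = spark.createDataFrame(
    [("T1", 100.0), ("T2", 249.5)], ["txn_id", "amount"]
)

# A full outer join keeps records present on either side, so missing rows
# and amount mismatches surface in a single pass.
recon = (
    source.alias("s")
    .join(target.alias("t"), on="txn_id", how="full_outer")
    .withColumn(
        "status",
        F.when(F.col("s.amount").isNull(), F.lit("missing_in_source"))
        .when(F.col("t.amount").isNull(), F.lit("missing_in_target"))
        .when(F.col("s.amount") != F.col("t.amount"), F.lit("amount_mismatch"))
        .otherwise(F.lit("matched")),
    )
)
recon.show()

For tuning at scale, the same join would typically be preceded by repartitioning both inputs on the join key, or by broadcasting the smaller side with F.broadcast.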
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 3 days a week in a client or Cognizant office at our Pune or Hyderabad location.
Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
What you must have to be considered
Experience designing and implementing data pipelines, ETL processes, and data storage solutions that support data-intensive applications
Extensive hands-on experience with Python and PySpark
Strong grasp of data warehousing concepts and well versed in processing structured and semi-structured data (JSON, XML, Avro, Parquet) with Spark/PySpark pipelines (see the ingestion sketch after this list)
Experience working with large-scale distributed data processing, and a solid understanding of Big Data architecture and distributed computing frameworks
Proficiency in Python and the Spark DataFrame API, and strong experience with complex data transformations using PySpark
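As one illustration of the semi-structured processing called out above, here is a minimal ingestion sketch; the paths and field names (event_id, payload.customer_id, event_ts) are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Schema inference is used for brevity; a production pipeline would declare
# an explicit schema to catch drift early.
raw = spark.read.json("/data/landing/events/")  # hypothetical path

# Flatten one nested field and derive a partition column.
flattened = raw.select(
    F.col("event_id"),
    F.col("payload.customer_id").alias("customer_id"),
    F.to_date("event_ts").alias("event_date"),
)

# Writing partitioned Parquet keeps downstream reads pruned by date.
(flattened.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events/"))  # hypothetical path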
These will help you stand out
Ability to leverage Python libraries such as cryptography or pycryptodome along with PySpark's user-defined functions (UDFs) to encrypt and decrypt data within your Spark workflows (see the sketch after this list)
Experience working on data risk metrics in PySpark, with strong skills in data partitioning, Z-value generation, query optimization, and spatial data processing and optimization
Experience with CI/CD for data pipelines
Working experience in at least one cloud environment: AWS, Azure, or GCP
Proven experience in an Agile/Scrum team environment
Experience developing loosely coupled, API-based systems
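For the encryption item above, here is a minimal sketch using the cryptography library's Fernet recipe inside PySpark UDFs; the column names are hypothetical, and in practice the key would come from a secrets manager rather than being generated inline.

from cryptography.fernet import Fernet
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

key = Fernet.generate_key()  # illustrative only; manage keys externally

def encrypt_value(value: str) -> str:
    # The key is captured in the UDF closure and shipped to executors.
    return Fernet(key).encrypt(value.encode()).decode()

def decrypt_value(token: str) -> str:
    return Fernet(key).decrypt(token.encode()).decode()

encrypt_udf = F.udf(encrypt_value, StringType())
decrypt_udf = F.udf(decrypt_value, StringType())

spark = SparkSession.builder.appName("encryption-sketch").getOrCreate()
df = spark.createDataFrame([("alice",), ("bob",)], ["customer_name"])

encrypted = df.withColumn("customer_name_enc", encrypt_udf("customer_name"))
roundtrip = encrypted.withColumn("decrypted", decrypt_udf("customer_name_enc"))
roundtrip.show(truncate=False)

Note that Python UDFs serialize rows between the JVM and Python workers, so for large volumes this pattern is often moved to pandas UDFs to amortize that cost.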
We're excited to meet people who share our mission and can make an impact in a variety of ways. Don't hesitate to apply, even if you only meet the minimum requirements listed. Think about your transferable experiences and unique skills that make you stand out as someone who can bring new and exciting things to this role.