Job Description
About this role:
Wells Fargo is seeking a Software Engineer (Data Engineering/Generative AI)
The Finance Technology Team within Enterprise Functions Technology (EFT) is seeking a Senior Data Engineer to join our Profit View team at Wells Fargo. As a Senior Software Engineer on the Profit View Data Engineering team, the candidate will play a pivotal role in the design, development, and maintenance of our metadata-driven data engineering frameworks. The candidate will work independently to deliver critical project tasks, focusing on building, enhancing, and troubleshooting robust data pipelines, APIs, Gen AI solutions, and wrapper capabilities for the Profit View project. This role is essential for ensuring best development practices and strong validations during the modernization of our current tech stack, the Gen AI solution we are building, Data Center exit migrations, and DPC onboarding. The candidate will collaborate with cross-functional teams to drive the implementation of scalable, high-performance data solutions using Python, Java, Gen AI, SQL, Apache Spark, Iceberg, Dremio, Power BI, and Autosys. Enterprise Finance & Technology is a collaborative, cross-functional, Agile organization looking for independent thinkers willing to drive innovative solutions to business problems through Gen AI, data delivery, and data management.
In this role, you will:
- Participate in low to moderately complex initiatives and projects associated with the technology domain, including installation, upgrades, and deployment efforts
- Identify opportunities for service quality and availability improvements within the technology domain environment
- Design, code, test, debug, and document for low to moderately complex projects and programs associated with the technology domain, including upgrades and deployments
- Review and analyze technical assignments or challenges that are related to low to medium risk deliverables and that require research, evaluation, and selection of alternative technology domains
- Present recommendations for resolving issues, or escalate issues as needed to meet established service level agreements
- Exercise some independent judgment while also developing understanding of the given technology domain in reference to security and compliance requirements
- Provide information to technology colleagues, internal partners, and stakeholders
Required Qualifications:
- 2+ years of software engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Desired Qualifications:
- At least 2 years of experience working with any RDBMS
- At least 2 years of experience building big data pipelines
- At least 2 years of experience working with Apache Spark, Java, Hive, and Hadoop
- At least 1 year of experience using LLMs for automation
- Strong programming experience with Python, Java, and SQL, and a good understanding of Bash scripting for data processing and automation
- Hands-on experience with Apache Spark for large-scale data processing
- Experience with Autosys or similar job scheduling/orchestration tools
- Experience working with CI/CD pipelines, improving code coverage, and remediating vulnerabilities
- 2+ years of experience working in an Agile environment
- Solid understanding of REST APIs, Dremio, and object stores
- Proven ability to independently deliver complex project tasks and solutions
- Solid understanding of data engineering best practices, including validation and quality assurance
- Excellent troubleshooting and problem-solving skills
- Proficiency in data modeling and database design
- Experience working with open table formats (e.g., Iceberg, Delta)
- Knowledge of data governance, security, and compliance requirements
- Hands-on experience with Gen AI use cases
- Experience with cloud data platforms (e.g., AWS, Azure, GCP)
- Exposure to the financial services or commercial and corporate investment banking domains
Job Expectations:
- This role is essential for ensuring best practices and strong validations during Profit View modernization, successful delivery of the Gen AI project (iCFO), Data Center exit migrations, and DPC onboarding. The candidate will collaborate with cross-functional teams to drive the implementation of scalable, high-performance data solutions using Python, Java, SQL, Apache Spark, Iceberg, Dremio, and Autosys.