Job Description
The role will require a unique blend of strong DataOps technical and design skills to translate business decisions into data requirements. This individual will build a deep understanding of the infrastructure data we use in order to work across the ID&A team and key stakeholders to identify the appropriate data to tell a data story. This includes implementing and maintaining a data architecture that follows data management best practices, ensuring that data ingestion, transformation, storage, and analytics are each handled according to their specific purpose with the appropriate tools: ingestion captures raw data without applying business logic, transformation processes data discretely for auditability, and analytical queries retrieve structured outputs without relying on upstream processes (see the sketch following the responsibilities list). They will be responsible for building and automating data pipelines to maximize data availability and efficiency, as well as for migrating the data model and transformations to our target architecture. This individual will bring a passion for data-driven decisions, enterprise solutions, and collaboration to the role, transforming platform data into actionable insights using data engineering and data visualization best practices.

Key responsibilities include:

Data Architecture: Perform all technical aspects of data architecture and database management for ID&A, including developing data pipelines, new database structures, and APIs as applicable
Data Design: Translate logical data architectures into physical data designs, ensuring alignment with data modeling best practices and standards
Data Process and Monitoring: Ensure proper data ingestion, validation, testing, and monitoring for the ID&A data lake
Data Quality Testing: Develop and provide subject matter expertise on data analysis, testing, and Quality Assurance (QA) methodologies and processes
Platform Administration: Support database and data platform administration for initiatives building, enhancing, or maintaining databases, data warehouses, and data pipelines
Data Migration: Design and support migration to a technology-agnostic data model that decouples data architecture from the specific tools or platforms used for storage, processing, or access
Data Integrity: Ensure accuracy, completeness, and data quality, independent of upstream or downstream systems; collaborate with data owners to validate and refine data sources where applicable
Agile Methodologies: Function as a senior member of an agile feature team and manage data assets per enterprise standards, guidelines, and policies
Collaboration: Partner closely with the business intelligence team to capture and define data requirements for new and enhanced data visualizations
Prioritization: Work with product teams to prioritize new features for ongoing sprints and manage the backlog
Continuous Improvement: Monitor performance and make recommendations for areas of opportunity and improvement in automation and tool usage
Compliance: Understand and abide by SDLC standards and policies
Liaison: Act as the point of contact for data-related inquiries and data access requests
Innovation: Leverage the evolving technical landscape as needed, including AI, Big Data, Machine Learning, and other technologies to deliver meaningful business insights
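The ingestion/transformation/analytics separation described above can be pictured as three independent stages. The following is a minimal sketch in Python (the role's required language) using only the standard library's sqlite3 module; the table and field names (raw_events, fact_events, host, cpu_pct) are illustrative assumptions, not details of this role or its platforms.

# Minimal sketch: each stage has one purpose and no knowledge of the others.
import json
import sqlite3

def ingest(conn: sqlite3.Connection, raw_records: list[dict]) -> None:
    """Ingestion: capture raw payloads verbatim -- no business logic applied."""
    conn.execute("CREATE TABLE IF NOT EXISTS raw_events (payload TEXT)")
    conn.executemany(
        "INSERT INTO raw_events (payload) VALUES (?)",
        [(json.dumps(r),) for r in raw_records],
    )
    conn.commit()

def transform(conn: sqlite3.Connection) -> None:
    """Transformation: a discrete, auditable step from raw to modeled data."""
    conn.execute("CREATE TABLE IF NOT EXISTS fact_events (host TEXT, cpu_pct REAL)")
    for (payload,) in conn.execute("SELECT payload FROM raw_events").fetchall():
        record = json.loads(payload)
        # Business logic lives here, and only here.
        conn.execute(
            "INSERT INTO fact_events (host, cpu_pct) VALUES (?, ?)",
            (record["host"], float(record["cpu"])),
        )
    conn.commit()

def query(conn: sqlite3.Connection) -> list[tuple]:
    """Analytics: read structured outputs only -- never the raw layer."""
    return conn.execute(
        "SELECT host, AVG(cpu_pct) FROM fact_events GROUP BY host"
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    ingest(conn, [{"host": "web-1", "cpu": "42.5"}, {"host": "web-1", "cpu": "57.5"}])
    transform(conn)
    print(query(conn))  # [('web-1', 50.0)]

The point of the separation is that the raw layer stays replayable for audit, the transformation step can be rerun or corrected in isolation, and the analytics layer never depends on how ingestion ran.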
Minimum Requirements:

4+ years of DataOps engineering experience implementing pipeline orchestration, data quality monitoring, governance, security processes, and self-service data access
Experience managing databases, ETL/ELT pipelines, data lake architectures, and real-time processing
Proficiency in API development and stream processing frameworks
Hands-on coding experience in Python
Hands-on expertise with design and development across one or more database management systems (e.g., SQL Server, PostgreSQL, Oracle)
Testing and Troubleshooting: Ability to test, troubleshoot, and debug data processes
Strong analytical skills with a proven ability to understand and document business data requirements in complete, accurate, extensible, and flexible logical data models and data visualization tools (e.g., Apptio BI, Power BI)
Ability to write efficient SQL queries to extract and manipulate data from relational databases, data warehouses, and batch processing systems
Experience in data quality and QA testing methodologies
Fluency in data risk, management, and compliance terminology and best practices
Proven track record of managing large, complex ecosystems with multiple stakeholders
Self-starter able to problem-solve effectively, organize and document processes, and prioritize features with limited guidance
An enterprise mindset that connects the dots across various requirements and the broader operations/infrastructure data architecture landscape
Excellent collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-team communication
Understanding of complex software delivery, including build, test, deployment, and operations; conversant in AI, Data Science, and Business Intelligence concepts and technology stacks
Exposure to distributed (multi-tiered) systems, algorithms, IT asset management, cloud services, and relational databases
Foundational public cloud (AWS, Google, Microsoft) certification; advanced public cloud certifications a plus
Experience working in technology business management, technology infrastructure, or data visualization teams a plus
Experience with design and coding across multiple platforms and languages a plus
Bachelor's degree in computer science, computer science engineering, data engineering, or a related field required; advanced degree preferred