Job Description
This role involves building, managing, and optimizing data pipelines, views, and models, and working closely with other teams to move those models into production for data and analytics consumers. There is also an opportunity to engage with clients on visualisation requirements, helping them extract maximum value from their data.

To be successful in this role, you must have experience with ETL/ELT techniques for integrating data into Data Warehouse solutions, along with a solid understanding of relational databases and Data Warehouse methodology, particularly with Azure Data Factory. You should also be familiar with related architectures and disciplines such as metadata management, performance management, and data quality management. Proficiency in creating and managing data environments in Snowflake, monitoring the data integration process, and working in a cloud architecture with data lakes is required, and excellent SQL skills are a must for developing and operating efficient, scalable, and reliable data pipelines. The ideal candidate has working experience with Snowflake and Azure, as well as proficiency in C#, SQL, and Python. An understanding of the Kimball dimensional model is desirable, and Snowflake SnowPro certification is a plus.

Snowflake releases updates frequently, so the data engineering team must stay current with these changes, implement best-practice solutions, and keep customers informed.

On the Azure side, familiarity with Azure Data Factory for big data ingestion; Azure Functions (C#) for dynamic scaling, high-velocity ingestion, middleware, and APIs; Azure Storage (tables, blobs, queues); Cosmos DB; Virtual Machines; Container Instances; Key Vault; and Azure DevOps is beneficial.
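To give a flavour of the day-to-day pipeline work, here is a minimal sketch of an idempotent ELT step that merges staged rows into a Snowflake reporting table. It assumes the snowflake-connector-python package and a staging table already populated by an upstream ingest (for example, Azure Data Factory); the account settings, table names, and columns are illustrative only, not part of any actual codebase for this role.

```python
# Minimal sketch of an idempotent ELT load into Snowflake.
# All names and credentials below are illustrative.
import os

import snowflake.connector  # pip install snowflake-connector-python


def merge_daily_sales() -> None:
    """Upsert staged rows into a reporting table with a single MERGE."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",   # hypothetical warehouse
        database="ANALYTICS",       # hypothetical database
        schema="STAGING",           # hypothetical schema
    )
    try:
        # MERGE makes the load idempotent: re-running it does not
        # duplicate rows, which keeps scheduled pipelines reliable.
        conn.cursor().execute(
            """
            MERGE INTO analytics.marts.fact_sales AS tgt
            USING analytics.staging.sales_raw AS src
              ON tgt.order_id = src.order_id
            WHEN MATCHED THEN UPDATE SET
              tgt.amount = src.amount,
              tgt.updated_at = src.updated_at
            WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
              VALUES (src.order_id, src.amount, src.updated_at)
            """
        )
    finally:
        conn.close()


if __name__ == "__main__":
    merge_daily_sales()
```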
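The Azure Functions work described above centres on high-velocity ingest and API endpoints. The role names C# for this, but as a hedged illustration of the pattern, the sketch below uses the Azure Functions Python v2 programming model: accept a payload over HTTP and hand it off to an Azure Storage queue via an output binding. The route and queue names are illustrative assumptions.

```python
# Sketch of a high-velocity ingest endpoint: HTTP in, Storage queue out.
# Route and queue names are illustrative, not from a real project.
import azure.functions as func

app = func.FunctionApp()


@app.route(route="ingest", auth_level=func.AuthLevel.FUNCTION)
@app.queue_output(
    arg_name="outmsg",
    queue_name="raw-events",              # hypothetical queue
    connection="AzureWebJobsStorage",     # standard storage connection setting
)
def ingest(req: func.HttpRequest, outmsg: func.Out[str]) -> func.HttpResponse:
    """Accept a raw event and queue it for downstream processing."""
    body = req.get_body().decode("utf-8")
    if not body:
        return func.HttpResponse("empty payload", status_code=400)
    outmsg.set(body)  # enqueue the raw event; a worker drains the queue
    return func.HttpResponse("accepted", status_code=202)
```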