Posted: 19 hours ago | On-site | Full Time
The mission of Global Analytics is to lead HEINEKEN into becoming a data-driven company and the best-connected brewer. As a team, we foster a data-driven, entrepreneurial culture across the company. We act as an incubator for smart data products in all business areas, from sales to logistics, marketing, and purchasing, rapidly launching value-creating use cases such as optimized spare parts management and smarter media spending. This year, our focus is to scale these and other use cases to as many countries as possible around the globe.
Our team comprises data scientists, engineers, business intelligence specialists, and analytics translators, working together to deliver data-driven solutions across the business. We operate across three continents, with satellite teams in South Africa, Poland, India, and Singapore. We are collaborative, innovative, and reliable, embracing diverse cultures and perspectives. We strive to create an environment where team members enjoy both the challenges they tackle and the people they solve them with. We innovate to transform HEINEKEN from a traditional to a data-driven company. We position ourselves as a business partner, driving value-creating decisions and building trust in our solutions. If these challenges sound interesting and exciting, we invite you to apply. We have ambitious goals, and we need your help to achieve them.

Position Overview

We are looking for an experienced Senior Data Engineer with a deep understanding of the Azure and Databricks platforms. The ideal candidate will have at least 8 years of experience in data engineering, with expertise in designing, developing, and maintaining data pipelines, optimizing data processing workflows, and ensuring the reliability and scalability of our data infrastructure.
Key Responsibilities

Design, develop, and maintain data pipelines and ETL processes using Azure Data Factory and Databricks. Implement automation and orchestration to streamline data processing.
Build solutions that support dynamic OpCo (Operating Company) deployments, enabling multi-tenant scalability across the platform.
Collaborate with stakeholders to understand their data requirements and translate them into technical specifications that drive analytics and machine learning initiatives.
Optimize data processing workflows for performance, scalability, and reliability. Implement best practices for efficient data storage and retrieval.
Establish data quality checks and validation processes to ensure the accuracy, consistency, and integrity of data across pipelines. Update and refine code testing strategies to continually improve data quality and reliability.
Monitor and troubleshoot data pipelines to ensure timely and accurate data delivery, proactively identifying and addressing potential issues.
Manage structured and unstructured data to support various analytics, machine learning, and business intelligence initiatives. Integrate new data pipeline executions with the HDP Upload Portal to streamline data ingestion and ensure seamless pipeline execution.
Enforce data security standards and best practices. Implement data governance frameworks and manage data access controls.
Contribute to the continuous improvement of engineering practices, tools, and processes. Stay up to date with the latest developments in Azure, Databricks, and data engineering technologies.
Qualifications

Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Experience: Minimum of 8 years of experience in data engineering, including
hands-on work with Azure Data Factory, Azure Databricks, and other Azure data
services.
Technical Proficiency: Strong SQL skills and experience with database
technologies such as Azure SQL Database or SQL Server, and familiarity with big
data processing frameworks like Apache Spark.
Programming Skills: Proficiency in one or more programming languages (such as Python or Scala), with an emphasis on writing clean, efficient, and scalable code.
Data Modelling: Strong understanding of data warehousing concepts and data
modelling, with experience in creating and managing enterprise data warehouses.
Problem-Solving: Excellent problem-solving skills with the ability to work
independently as well as in a collaborative team environment.
Communication: Strong communication skills with the ability to effectively
collaborate with both technical and non-technical stakeholders to drive project
success.
Azure Synapse Analytics: Experience with data integration, warehousing, and
analytics using Azure Synapse.
Data Governance & Security: Knowledge of data governance, data security, and
compliance best practices, and experience implementing these in a data
engineering context.
DevOps & CI/CD: Familiarity with CI/CD pipelines and DevOps practices for data
engineering, including version control, automated testing, and continuous
deployment.
Multi-Cloud Experience: Experience with other cloud platforms such as AWS or GCP is a plus, bringing additional versatility to the role.
What We Offer

Innovative Environment: Work in a dynamic environment where cutting-edge technology and innovation are at the heart of everything we do.
Professional Growth: Opportunities for continuous learning, professional
development, and career advancement within a rapidly growing company.
Collaborative Culture: Be part of a diverse, creative team that values initiative
and collaboration.
Comprehensive Benefits: Competitive salary, comprehensive health benefits,
retirement plans, and other perks to support your work-life balance.
If you are passionate about data engineering and excited to work with Azure and
Databricks to build scalable data solutions, we invite you to apply. Join us and contribute
to our mission of leveraging data to drive business transformation.
Salary: Not disclosed