Senior Data Engineer

Experience: 8 years

Posted: 2 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

About the Global Analytics Team

The mission of Global Analytics is to lead HEINEKEN into becoming a data-driven company and the best-connected brewer. As a team, we foster a data-driven, entrepreneurial culture across the company. We act as an incubator for smart data products in all business areas, from sales to logistics, marketing, and purchasing, rapidly launching value-creating use cases such as optimized spare parts management and smarter media spending. This year, our focus is to scale these and other use cases to as many countries as possible around the globe.

Our team comprises data scientists, engineers, business intelligence specialists, and analytics translators, working together to deliver data-driven solutions across the business. We operate across three continents, with satellite teams in South Africa, Poland, India, and Singapore. We are collaborative, innovative, and reliable, embracing diverse cultures and perspectives. We strive to create an environment where team members enjoy both the challenges they tackle and the people they solve them with.

We innovate to transform HEINEKEN from a traditional to a data-driven company. We position ourselves as a business partner, driving value-creating decisions and building trust in our solutions. If these challenges sound interesting and exciting, we invite you to apply. We have ambitious goals, and we need your help to achieve them.


Position Overview

We are looking for an experienced Senior Data Engineer with a deep understanding of Azure and Databricks platforms. The ideal candidate will have at least 8 years of experience in data engineering, with expertise in designing, developing, and maintaining data pipelines, optimizing data processing workflows, and ensuring the reliability and scalability of our data infrastructure.


Key Responsibilities

  • Data Pipeline Development:

    Design, develop, and maintain scalable data pipelines and ETL processes using Azure Data Factory and Databricks. Implement automation and orchestration to streamline data processing.
  • Dynamic OpCo Deployments:

    Refactor existing code and RESTful services to support dynamic OpCo (Operating Company) deployments, enabling multi-tenant scalability across the platform.
  • Collaboration:

    Work closely with data scientists, analysts, and business stakeholders to understand their data requirements and translate them into technical specifications that drive analytics and machine learning initiatives.
  • Optimization & Scalability:

    Optimize and manage data processing workflows for performance, scalability, and reliability. Implement best practices for efficient data storage and retrieval.
  • Data Quality & Validation:

    Implement robust data quality checks and validation processes to ensure the accuracy, consistency, and integrity of data across pipelines. Update and refine code testing strategies to continually improve data quality and reliability.
  • Monitoring & Troubleshooting:

    Monitor, troubleshoot, and optimize data pipelines to ensure timely and accurate data delivery, proactively identifying and addressing potential issues.
  • Data Handling:

    Work with large-scale datasets, whether structured, semi-structured, or unstructured, to support various analytics, machine learning, and business intelligence initiatives. Integrate new data pipeline executions with the HDP Upload Portal to streamline data ingestion and ensure seamless pipeline execution.
  • Security & Compliance:

    Ensure data security and compliance with industry standards and best practices. Implement data governance frameworks and manage data access controls.
  • Continuous Improvement:

    Continuously improve and evolve data engineering practices, tools, and processes. Stay up to date with the latest developments in Azure, Databricks, and data engineering technologies.


Required Qualifications

  • Education:

    Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
  • Experience:

    Minimum of 8 years of experience in data engineering, including hands-on work with Azure Data Factory, Azure Databricks, and other Azure data services.
  • Technical Proficiency:

    Strong SQL skills and experience with database technologies such as Azure SQL Database or SQL Server, and familiarity with big data processing frameworks like Apache Spark.
  • Programming Skills:

    Proficiency in one or more programming languages (such as Python or Scala), with an emphasis on writing clean, efficient, and scalable code.
  • Data Modelling:

    Strong understanding of data warehousing concepts and data modelling, with experience in creating and managing enterprise data warehouses.
  • Problem-Solving:

    Excellent problem-solving skills with the ability to work independently as well as in a collaborative team environment.
  • Communication:

    Strong communication skills with the ability to effectively collaborate with both technical and non-technical stakeholders to drive project success.


Preferred Qualifications

  • Azure Synapse Analytics:

    Experience with data integration, warehousing, and analytics using Azure Synapse.
  • Data Governance & Security:

    Knowledge of data governance, data security, and compliance best practices, and experience implementing these in a data engineering context.
  • DevOps & CI/CD:

    Familiarity with CI/CD pipelines and DevOps practices for data engineering, including version control, automated testing, and continuous deployment.
  • Multi-Cloud Experience:

    Experience with other cloud platforms such as AWS or GCP is a plus, bringing additional versatility to the role.


What We Offer

  • Innovative Environment:

    Work in a dynamic environment where cutting-edge technology and innovation are at the heart of everything we do.
  • Professional Growth:

    Opportunities for continuous learning, professional development, and career advancement within a rapidly growing company.
  • Collaborative Culture:

    Be part of a diverse, creative team that values initiative and collaboration.
  • Comprehensive Benefits:

    Competitive salary, comprehensive health benefits, retirement plans, and other perks to support your work-life balance.

Join Our Team

If you are passionate about data engineering and excited to work with Azure and Databricks to build scalable data solutions, we invite you to apply. Join us and contribute to our mission of leveraging data to drive business transformation.
