Data Engineer [T500-21796]

Experience: 2 years


Posted: 5 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

About Stolt-Nielsen:

Stolt-Nielsen is a family-founded global leader in bulk-liquid and chemical logistics, transportation, and storage, known for its commitment to safety, service excellence, and innovation. At the Stolt-Nielsen Digital Innovation Centre, we bring this legacy into the future — building smarter systems, driving digital transformation, and shaping what's next for the world of logistics. Here, you’re encouraged to think boldly, challenge convention, and help move today’s products toward tomorrow’s possibilities.


About the Role:

We’re looking for a Data Engineer with strong Databricks, Python, and SQL skills to build and optimize the data infrastructure that powers our analytics and AI initiatives.

In this role, you’ll design and maintain reliable data pipelines, ensure data quality and performance, and enable data-driven decision-making across the organization. We believe Data Engineers only become more relevant in the age of Generative AI, as we transition to solutions like Databricks Genie that rely on well-modelled and well-described data.

You’ll work closely with analysts, data scientists, and business stakeholders to turn raw data into trusted, accessible, and actionable insights.

At Stolt-Nielsen, we take Digital seriously. We invest in our teams through training and mentoring, and equip them with the tools to work in a modern way (engineering laptops, Cursor licenses). As we continue building up our engineering practice, there is ample room for you to contribute, take initiative, and shape the future of our ways of working and technology landscape.


What You’ll Do:

  • Design, develop, and maintain scalable data pipelines in Databricks using PySpark and Delta Lake.
  • Build robust ETL/ELT processes to ingest data from multiple sources (APIs, databases, files, cloud systems).
  • Optimize data models and storage for analytics, machine learning, and BI reporting.
  • Write and maintain SQL queries, views, and stored procedures for data transformation and analysis.
  • Collaborate with cross-functional teams to understand business needs and translate them into technical solutions.
  • Implement and enforce data quality, governance, and security standards across pipelines and platforms.
  • Monitor and troubleshoot data workflows for performance, reliability, and cost efficiency.
  • Stay current with best practices in data engineering, Databricks, and cloud-native data architectures.
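
The duties above centre on Delta Lake pipelines and layered (medallion-style) modelling. As a rough illustration of that bronze → silver → gold flow, here is a toy sketch in plain Python standing in for PySpark; the table and field names (shipments, port, volume_t) are invented for the example and are not from the posting:

```python
# Toy medallion flow: bronze (raw) -> silver (cleaned) -> gold (aggregated).
# Plain Python stands in for PySpark; names here are illustrative only.
from collections import defaultdict

# Bronze: raw records as ingested, duplicates and bad rows included.
bronze = [
    {"id": 1, "port": "Rotterdam", "volume_t": "120.5"},
    {"id": 1, "port": "Rotterdam", "volume_t": "120.5"},      # duplicate
    {"id": 2, "port": "Houston", "volume_t": "not_a_number"}, # bad row
    {"id": 3, "port": "Houston", "volume_t": "80.0"},
]

def to_silver(rows):
    """Silver: deduplicate on id and enforce types, dropping bad rows."""
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        try:
            out.append({"id": r["id"], "port": r["port"],
                        "volume_t": float(r["volume_t"])})
            seen.add(r["id"])
        except ValueError:
            continue  # rows failing validation are excluded
    return out

def to_gold(rows):
    """Gold: aggregate to a reporting-ready shape (total volume per port)."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["port"]] += r["volume_t"]
    return dict(totals)

gold = to_gold(to_silver(bronze))
print(gold)  # {'Rotterdam': 120.5, 'Houston': 80.0}
```

In PySpark the same layering would typically be expressed as Delta tables with `dropDuplicates`, schema enforcement, and a `groupBy` aggregation.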


What You’ll Bring:

  • Bachelor’s or Master’s degree in Computer Science or a related field.
  • 2+ years of hands-on experience as a Data Engineer, ETL Developer, or similar.
  • Strong proficiency in Python (especially PySpark and pandas).
  • Excellent command of SQL for data manipulation, performance tuning, and debugging.
  • Hands-on experience with Databricks (workspace, notebooks, jobs, clusters, and Unity Catalog).
  • Solid understanding of data warehousing concepts, distributed data processing, and data modeling approaches such as dimensional modelling (Kimball) and medallion architectures.
  • Experience with Azure PaaS ecosystem (data/storage).
  • Strong analytical mindset and attention to detail.
  • Effective communication skills and ability to work cross-functionally.
  • Experience in CI/CD pipelines, automated testing, and Agile delivery.


Must Have Skills:

  • Databricks (PySpark, Delta Lake, Jobs, Workflows, Unity Catalog/metadata)
  • Python/PySpark (data processing and automation)
  • SQL (advanced querying and performance tuning)
  • Data pipeline orchestration (e.g., Databricks Workflows)
  • Version control (Git) and CI/CD familiarity
  • Data modeling and transformation design
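
As a small illustration of the SQL transformation and modeling work listed above, the sketch below builds a reusable reporting view. It runs against SQLite purely so it is self-contained; in Databricks the equivalent would be Spark SQL against Delta tables, and the table and column names are invented for the example:

```python
# Hedged sketch: a SQL transformation expressed as a view, the kind of
# reusable layer a BI dashboard would read from. SQLite is used only to
# keep the example self-contained; names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (id INTEGER, port TEXT, volume_t REAL);
INSERT INTO shipments VALUES
  (1, 'Rotterdam', 120.5),
  (2, 'Houston',    80.0),
  (3, 'Houston',    40.0);

-- Transformation layer: per-port totals and shipment counts.
CREATE VIEW port_volumes AS
SELECT port,
       SUM(volume_t) AS total_volume_t,
       COUNT(*)      AS shipments
FROM shipments
GROUP BY port;
""")

for row in conn.execute("SELECT * FROM port_volumes ORDER BY port"):
    print(row)
# ('Houston', 120.0, 2)
# ('Rotterdam', 120.5, 1)
```

Encapsulating transformations in views (or Delta tables materialised by jobs) keeps downstream queries simple and makes the logic easy to test and tune.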


Nice to have Skills:

  • Familiarity with dbt or modern data stack tools.
  • Knowledge of data governance, cataloging, and access control (e.g., Unity Catalog, Purview).
  • Exposure to machine learning workflows and ML model data preparation.
  • Strong background in performance tuning and cost optimization on Databricks.
  • Familiarity with ERP financial modules.
