Minimum 5 years' background in software development, with substantial experience in:
- Business requirements analysis and data modelling
- Data querying, particularly with SQL query engines
- Big data analytics, data processing, and data lake / lakehouse architecture

Hands-on experience with open-source big data technologies such as: Apache Kafka, Apache Pekko, Apache Spark & Spark Structured Streaming, Delta Lake or Apache Iceberg, AWS Athena, Trino, MongoDB, AWS S3, MinIO S3

Proven hands-on experience with:
- Setting up data governance tooling and processes (schema registry, data lineage control) and data access control
- Setting up data pipelines for model training and inference
- Kubernetes or OpenShift in the context of big data analytics, and AWS services