Data Engineer / Azure, PySpark (Contract – Remote, UK Client)

We are seeking an experienced Data Engineer with strong PySpark and SQL expertise to help modernise legacy reporting logic into scalable data pipelines on Azure Databricks. The role is with a UK-based client and requires extensive hands-on experience with distributed data processing and Azure data services.

Key Responsibilities
- Convert and optimise legacy Crystal Reports and SQL stored procedures into PySpark pipelines.
- Build and maintain data workflows in Azure Databricks and Azure Data Factory (ADF).
- Integrate and align transformed data with Azure Synapse.
- Conduct unit, integration, and regression testing to ensure output accuracy.
- Collaborate with analysts, QA, and architects to refine logic and resolve mismatches.
- Document data flows, logic mappings, and validation results.

Required Skills
- 5+ years in data engineering or ETL development.
- Strong PySpark and SQL skills (complex transformations, performance tuning).
- Hands-on experience with Azure Databricks, ADF, and Synapse.
- Solid understanding of dimensional data modelling (star/snowflake schemas).
- Experience with Git and CI/CD workflows.

Nice to Have
- Power BI dataset design and optimisation
- Unity Catalog / data governance knowledge
- Experience with modernisation or data migration projects

Details
- Client: UK-based
- Contract: 12 months
- Location: Remote (UK/US overlap required)
- Team: Agile squad