India
Not disclosed
On-site
Full Time
At The Institute of Clever Stuff (ICS), we don’t just solve problems: we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants alongside our virtual AI consultant, fortu.ai, pairing human ingenuity with AI-powered intelligence to deliver smarter, faster, and more effective results.

Meet fortu.ai:
Used by some of the world’s leading organisations as a business question pipeline generator, ROI tracker, and innovation engine, all in one. Trained on 400+ accelerators and 8 years of solving complex problems with global organisations. With fortu.ai, we’re disrupting a $300+ billion industry, turning traditional consulting on its head.

Context of work:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets such as the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Scope of the work (client project to deliver):
- Data Pipeline Development: Build new pipelines for data models using AWS Glue and PySpark, leading on end-to-end data pipeline creation and execution (a pipeline sketch follows at the end of this listing).
- Data Pipeline Management: Deploy new features into core data models that require re-deployment of the pipeline through staging environments (dev, pre-prod, prod). Support regular refreshes of the data.
- Data Model Performance: Lead on finding opportunities to optimise and automate data ingestion, data refreshes, and data validation steps for the data models.
- Data Modelling: Support the team in building new data models and solutions, working closely with data scientists.
- Data Quality Assurance: Establish processes to monitor data pipelines for data loss, corruption, or duplication, and take corrective action.

Requirements:
- Capable and confident in data engineering concepts: designing data models, building data warehouses, automating data pipelines, and managing large datasets.
- Strong background in data modelling, creating relational data models, data warehousing, and ETL processes.
- Ability to design, build, and manage efficient and reliable data pipelines.
- Strong coding best practices, including version control.
- Experience working in agile, sprint-based delivery environments.
- Experience working with customer and transactional data.
- Experience collaborating with a mixed team of permanent client colleagues and other partners and vendors, including data scientists, data engineers, data analysts, software engineers, UI/UX designers, and internal subject matter experts.
- Experience delivering to a large enterprise of stakeholders.

Core Technologies: SQL, Python, PySpark/Spark SQL, AWS (Redshift, Athena, Glue, Lambda, RDS), AWS Serverless Data Lake Framework (SDLF), SQL client software (e.g. DBeaver), Bazel (automated testing), Git.

Nice-to-have Technologies: Databricks, Amazon SageMaker, Jupyter Notebook, MLOps, ML model development, and ML engineering would be advantageous.
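As a rough illustration of the pipeline development described above, here is a minimal sketch of an AWS Glue job in PySpark that reads a catalogued source table, applies a basic duplicate/null quality gate, and writes Parquet to a conformed zone. The database, table, key column, and S3 path are hypothetical placeholders, not details of the client's environment.

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are hypothetical)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="transactions"
)

# Basic quality gate: drop exact duplicates on the key and rows missing it
df = source.toDF()
clean = df.dropDuplicates(["transaction_id"]).filter(
    df["transaction_id"].isNotNull()
)

# Write the conformed output as Parquet (bucket/path is a placeholder)
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(clean, glue_context, "clean"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/conform-zone/transactions/"},
    format="parquet",
)

job.commit()
```

In practice a job like this would be wired to a crawler, schedule, or step function rather than run ad hoc, and promoted through dev, pre-prod, and prod as the staging flow above describes.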
India
Not disclosed
On-site
Full Time
At The Institute of Clever Stuff (ICS), we don’t just solve problems: we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants alongside our virtual AI consultant, fortu.ai, pairing human ingenuity with AI-powered intelligence to deliver smarter, faster, and more effective results.

Meet fortu.ai:
Used by some of the world’s leading organisations as a business question pipeline generator, ROI tracker, and innovation engine, all in one. Trained on 400+ accelerators and 8 years of solving complex problems with global organisations. With fortu.ai, we’re disrupting a $300+ billion industry, turning traditional consulting on its head.

Key Responsibilities:

Data Modelling Tasks:
- Initiate and manage gap analysis and source-to-target mapping exercises.
- Gain a comprehensive understanding of the EA extract.
- Map the SAP source used in EA extracts to the AWS Transform Zone, AWS Conform Zone, and AWS Enrich Zone.
- Develop a matrix view of all Excel/Tableau reports to identify any missing fields or tables from SAP in the Transform Zone.
- Engage with SMEs to finalise the Data Model (DM), and obtain email confirmation and approval for the finalised DM.
- Perform data modelling using ER Studio and STTM.
- Generate DDL scripts for data engineers to facilitate implementation.

Data Engineering Tasks:
- Set up infrastructure for pipelines, including Glue jobs, crawlers, scheduling, step functions, etc.
- Build, deploy, test, and run pipelines on demand in lower environments.
- Verify data integrity: no duplicates, all columns present in the final table, etc.
- Write unit tests for methods used in the pipeline, using standard testing tools (a unit-test sketch follows at the end of this listing).
- Apply code formatting and linting.
- Collaborate with other modelling engineers to align on the correct approach.
- Update existing pipelines for CZ tables (SDLF and OF) where necessary with new columns if they are required for EZ tables.
- Raise DDP requests to register databases and tables, and to load data into the raw zone.

Other:
- Create comprehensive documentation; ensure each task is accompanied by detailed notes specific to its functional area for clear tracking and reference.
- Analyse and manage bugs and change requests raised by business/SMEs.
- Collaborate with Data Analysts and Virtual Engineers (VE) to refine and enhance semantic modelling in Power BI.
- Plan out work using Microsoft Azure DevOps (ADO), ensuring dependencies, status, and effort are correctly reflected.

Required Skills:
- Proven experience in data modelling and data pipeline development.
- Proficiency with tools like ER Studio, STTM, AWS Glue, Redshift & Athena, and Power BI.
- Strong SQL and experience with generating DDL scripts.
- Experience working in SAP data environments.
- Experience in any of these domain areas is highly desirable: Logistics, Supply Planning, Exports, and IFOT.
- Familiarity with cloud platforms, particularly AWS.
- Hands-on experience with DevOps and Agile methodologies (e.g., Azure DevOps (ADO)).
- Strong communication and documentation skills.
- Ability to work collaboratively with cross-functional teams.
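As a hedged illustration of unit-testing pipeline methods with standard tools, here is a minimal pytest sketch that exercises a small PySpark transformation on a local Spark session. The add_net_amount function and its column names are hypothetical stand-ins, not part of the client's codebase.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def add_net_amount(df):
    # Hypothetical pipeline method under test: derive net from gross and tax
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("tax_amount"))


@pytest.fixture(scope="module")
def spark():
    # Small local session so the test runs without a cluster
    return (
        SparkSession.builder.master("local[1]")
        .appName("pipeline-unit-tests")
        .getOrCreate()
    )


def test_add_net_amount_computes_difference(spark):
    df = spark.createDataFrame([(100.0, 20.0)], ["gross_amount", "tax_amount"])
    result = add_net_amount(df).collect()[0]
    assert result["net_amount"] == pytest.approx(80.0)
```

Keeping each transformation as a plain function over a DataFrame, as sketched here, is what makes this style of fast, cluster-free testing possible.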
India
Not disclosed
Remote
Contractual
Where: India (fully remote)
Business Hours: UK hours

Process:
1. ICS first interview (30 minutes)
2. CV shared with the client, followed by 1-2 further rounds

Context of work:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets such as the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Required Skills and Experience:
- Proven experience in data modelling and data pipeline development.
- Proficiency with tools like ER Studio, STTM, AWS Glue, Redshift & Athena, and Power BI.
- Strong SQL and experience with generating DDL scripts.
- Experience working in SAP data environments.
- Experience in any of these domain areas is highly desirable: Logistics, Supply Planning, Exports, and IFOT.
- Familiarity with cloud platforms, particularly AWS.
- Hands-on experience with DevOps and Agile methodologies (e.g., Azure DevOps (ADO)).
- Strong communication and documentation skills.
- Ability to work collaboratively with cross-functional teams.

Key Responsibilities:

Data Modelling:
- Initiate and manage gap analysis and source-to-target mapping exercises.
- Gain a comprehensive understanding of the EA extract.
- Map the SAP source used in EA extracts to the AWS Transform Zone, AWS Conform Zone, and AWS Enrich Zone.
- Develop a matrix view of all Excel/Tableau reports to identify any missing fields or tables from SAP in the Transform Zone.
- Engage with SMEs to finalise the Data Model (DM), and obtain email confirmation and approval for the finalised DM.
- Perform data modelling using ER Studio and STTM.
- Generate DDL scripts for data engineers to facilitate implementation.

Data Engineering:
- Set up infrastructure for pipelines, including Glue jobs, crawlers, scheduling, step functions, etc.
- Build, deploy, test, and run pipelines on demand in lower environments.
- Verify data integrity: no duplicates, all columns present in the final table, etc. (a sketch of such checks follows at the end of this listing).
- Write unit tests for methods used in the pipeline, using standard testing tools.
- Apply code formatting and linting.
- Collaborate with other modelling engineers to align on the correct approach.
- Update existing pipelines for CZ tables (e.g., Serverless Data Lake Framework (SDLF)) where necessary with new columns if they are required for EZ tables.
- Raise DDP requests to register databases and tables, and to load data into the raw zone.

Other:
- Create comprehensive documentation; ensure each task is accompanied by detailed notes specific to its functional area for clear tracking and reference.
- Analyse and manage bugs and change requests raised by business/SMEs.
- Collaborate with Data Analysts and Virtual Engineers (VE) to refine and enhance semantic modelling in Power BI.
- Plan out work using Microsoft Azure DevOps (ADO), ensuring dependencies, status, and effort are correctly reflected.
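As a hedged illustration of the data-integrity verification described under Data Engineering, here is a minimal PySpark sketch of reusable checks for duplicate keys and missing columns. The function names and the example key column are illustrative assumptions, not the client's actual tooling.

```python
from pyspark.sql import DataFrame, functions as F


def assert_no_duplicates(df: DataFrame, key_cols: list) -> None:
    # Any key group with more than one row indicates duplication
    dupes = df.groupBy(*key_cols).count().filter(F.col("count") > 1)
    n = dupes.count()
    if n > 0:
        dupes.show(10, truncate=False)  # surface a sample for debugging
        raise ValueError(f"{n} duplicated key(s) found for {key_cols}")


def assert_expected_columns(df: DataFrame, expected: list) -> None:
    # The final table must carry every expected column
    missing = set(expected) - set(df.columns)
    if missing:
        raise ValueError(f"Missing columns in final table: {sorted(missing)}")


# Example usage against a hypothetical conformed table:
# assert_no_duplicates(final_df, ["transaction_id"])
# assert_expected_columns(final_df, ["transaction_id", "site_id", "gross_amount"])
```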