At The Institute of Clever Stuff (ICS), we don't just solve problems; we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants alongside our virtual AI consultant, fortu.ai, blending human ingenuity with fortu.ai's AI-powered intelligence to deliver smarter, faster and more effective results.

Meet fortu.ai
Used by some of the world's leading organisations as a business question pipeline generator, ROI tracker, and innovation engine all in one. Trained on 400+ accelerators and 8 years of solving complex problems with global organisations. With fortu.ai, we're disrupting a $300+ billion industry, turning traditional consulting on its head.

Context of work:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets such as the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Scope of the work (client project to deliver):
- Data Pipeline Development: Building new pipelines for data models using AWS Glue and PySpark; leading on end-to-end data pipeline creation and execution (see the sketch at the end of this posting).
- Data Pipeline Management: Deploying new features into core data models that require re-deployment of the pipeline through staging environments (dev, pre-prod, prod); supporting regular refreshes of the data.
- Data Model Performance: Leading on finding opportunities to optimise and automate data ingestion, data refreshes, and data validation steps for the data models.
- Data Modelling: Supporting the team in building new data models and solutions, working closely with data scientists.
- Data Quality Assurance: Establishing processes to monitor data pipelines for data loss, corruption, or duplication, and taking corrective action.

Requirements:
- Capable and confident in data engineering concepts: designing data models, building data warehouses, automating data pipelines, and managing large datasets.
- Strong background in data modelling, creating relational data models, data warehousing and ETL processes.
- Ability to design, build and manage efficient and reliable data pipelines.
- Strong coding best practices, including version control.
- Experience working in agile, sprint-based delivery environments.
- Experience working with customer and transactional data.
- Experience collaborating with a mixed team of permanent client colleagues and other partners and vendors, including Data Scientists, Data Engineers, Data Analysts, Software Engineers, UI/UX Designers and internal Subject Matter Experts.
- Experience delivering to a large enterprise of stakeholders.

Core Technologies: SQL, Python, PySpark/Spark SQL, AWS (Redshift, Athena, Glue, Lambda, RDS), AWS Serverless Data Lake Framework (SDLF), SQL client software (e.g. DBeaver), Bazel (automated testing), Git.

Nice-to-have Technologies: Databricks, Amazon SageMaker, Jupyter Notebook, MLOps, ML model development, and ML engineering would be advantageous.
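As a rough illustration of the pipeline work described above, the following is a minimal AWS Glue PySpark job sketch. The database, table, column and bucket names (raw_zone, transactions, TXN_ID, example-bucket) are placeholders rather than the client's actual resources, and the transformation is only an example of conforming raw data.

```python
# Minimal sketch of a Glue PySpark job; names are placeholders, not client resources.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw-zone table registered in the Glue Data Catalog (placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone",
    table_name="transactions",
).toDF()

# Example transformation: standardise a column name and add load timestamp/date columns.
conformed = (
    raw.withColumnRenamed("TXN_ID", "transaction_id")
       .withColumn("load_ts", F.current_timestamp())
       .withColumn("load_date", F.to_date("load_ts"))
)

# Write to a conformed-zone location as partitioned Parquet (placeholder bucket).
(conformed.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-bucket/conform_zone/transactions/"))

job.commit()
```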
At The Institute of Clever Stuff (ICS), we don't just solve problems; we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants alongside our virtual AI consultant, fortu.ai, blending human ingenuity with fortu.ai's AI-powered intelligence to deliver smarter, faster and more effective results.

Meet fortu.ai
Used by some of the world's leading organisations as a business question pipeline generator, ROI tracker, and innovation engine all in one. Trained on 400+ accelerators and 8 years of solving complex problems with global organisations. With fortu.ai, we're disrupting a $300+ billion industry, turning traditional consulting on its head.

Key Responsibilities:

Complete Data Modelling Tasks
- Initiate and manage Gap Analysis and Source-to-Target Mapping exercises.
- Gain a comprehensive understanding of the EA extract.
- Map the SAP source used in EA extracts to the AWS Transform Zone, AWS Conform Zone, and AWS Enrich Zone.
- Develop a matrix view of all Excel/Tableau reports to identify any missing fields or tables from SAP in the Transform Zone.
- Engage with SMEs to finalise the Data Model (DM).
- Obtain email confirmation and approval for the finalised DM.
- Perform data modelling using ER Studio and STTM.
- Generate DDL scripts for data engineers to facilitate implementation.

Complete Data Engineering Tasks
- Set up infrastructure for pipelines, including Glue jobs, crawlers, scheduling, step functions, etc.
- Build, deploy, test and run pipelines on demand in lower environments.
- Verify data integrity: no duplicates, all columns present in the final table, etc. (see the sketch at the end of this posting).
- Write unit tests for methods used in the pipeline and use standard tools for testing.
- Apply code formatting and linting.
- Collaborate with other Modelling Engineers to align on the correct approach.
- Update existing pipelines for CZ tables (SDLF and OF) where necessary with new columns if they are required for EZ tables.
- Raise DDP requests to register databases and tables, and to load data into the raw zone.
- Create comprehensive documentation, ensuring each task is accompanied by detailed notes specific to its functional area for clear tracking and reference.
- Analyse and manage bugs and change requests raised by the business/SMEs.
- Collaborate with Data Analysts and Virtual Engineers (VEs) to refine and enhance semantic modelling in Power BI.
- Plan out work using Microsoft Azure DevOps (ADO), ensuring dependencies, status and effort are correctly reflected.

Required Skills:
- Proven experience in data modelling and data pipeline development.
- Proficiency with tools such as ER Studio, STTM, AWS Glue, Redshift, Athena, and Power BI.
- Strong SQL and experience generating DDL scripts.
- Experience working in SAP data environments.
- Experience in any of these domain areas is highly desirable: Logistics, Supply Planning, Exports and IFOT.
- Familiarity with cloud platforms, particularly AWS.
- Hands-on experience with DevOps and Agile methodologies (e.g., Azure DevOps).
- Strong communication and documentation skills.
- Ability to work collaboratively with cross-functional teams.
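To make the "verify data integrity" responsibility above concrete, here is a small, illustrative PySpark check of the kind that could run after a pipeline load: it confirms that no expected columns are missing from the final table and that the business key contains no duplicates. The table name (enrich_zone.orders_fact) and the column names are hypothetical placeholders, not the client's data model.

```python
# Illustrative post-load integrity checks; table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

EXPECTED_COLUMNS = {"order_id", "material_id", "plant", "quantity", "load_ts"}  # placeholder schema
KEY_COLUMNS = ["order_id", "material_id"]                                       # placeholder business key

df = spark.table("enrich_zone.orders_fact")  # placeholder EZ table

# 1. Every expected column must be present in the final table.
missing = EXPECTED_COLUMNS - set(df.columns)
assert not missing, f"Missing columns in final table: {missing}"

# 2. The business key must be unique, i.e. no duplicate rows slipped through.
duplicate_count = (
    df.groupBy(*KEY_COLUMNS)
      .count()
      .filter(F.col("count") > 1)
      .count()
)
assert duplicate_count == 0, f"{duplicate_count} duplicate keys found"
```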
Where: India (fully remote)
Business Hours: UK hours

Process:
1. ICS 1st interview (30 minutes)
2. CV shared with client, followed by 1-2 further rounds

Context of work:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets such as the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Required Skills and Experience:
- Proven experience in data modelling and data pipeline development.
- Proficiency with tools such as ER Studio, STTM, AWS Glue, Redshift, Athena, and Power BI.
- Strong SQL and experience generating DDL scripts.
- Experience working in SAP data environments.
- Experience in any of these domain areas is highly desirable: Logistics, Supply Planning, Exports and IFOT.
- Familiarity with cloud platforms, particularly AWS.
- Hands-on experience with DevOps and Agile methodologies (e.g., Azure DevOps).
- Strong communication and documentation skills.
- Ability to work collaboratively with cross-functional teams.

Key Responsibilities:

Data Modelling
- Initiate and manage Gap Analysis and Source-to-Target Mapping exercises.
- Gain a comprehensive understanding of the EA extract.
- Map the SAP source used in EA extracts to the AWS Transform Zone, AWS Conform Zone, and AWS Enrich Zone.
- Develop a matrix view of all Excel/Tableau reports to identify any missing fields or tables from SAP in the Transform Zone.
- Engage with SMEs to finalise the Data Model (DM).
- Obtain email confirmation and approval for the finalised DM.
- Perform data modelling using ER Studio and STTM.
- Generate DDL scripts for data engineers to facilitate implementation.

Data Engineering
- Set up infrastructure for pipelines, including Glue jobs, crawlers, scheduling, step functions, etc.
- Build, deploy, test and run pipelines on demand in lower environments.
- Verify data integrity: no duplicates, all columns present in the final table, etc.
- Write unit tests for methods used in the pipeline and use standard tools for testing (see the test sketch at the end of this posting).
- Apply code formatting and linting.
- Collaborate with other Modelling Engineers to align on the correct approach.
- Update existing pipelines for CZ tables (e.g., Serverless Data Lake Framework (SDLF)) where necessary with new columns if they are required for EZ tables.
- Raise DDP requests to register databases and tables, and to load data into the raw zone.

Other
- Create comprehensive documentation, ensuring each task is accompanied by detailed notes specific to its functional area for clear tracking and reference.
- Analyse and manage bugs and change requests raised by the business/SMEs.
- Collaborate with Data Analysts and Virtual Engineers (VEs) to refine and enhance semantic modelling in Power BI.
- Plan out work using Microsoft Azure DevOps (ADO), ensuring dependencies, status and effort are correctly reflected.
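The unit-testing responsibility above could look something like the following pytest-style sketch for a single pipeline method, run against a local Spark session. The function under test (add_load_date) is a hypothetical example used only to show the pattern, not client code.

```python
# Hypothetical example of unit-testing one pipeline transformation method with pytest.
import datetime

from pyspark.sql import SparkSession, functions as F


def add_load_date(df):
    """Hypothetical pipeline method: derive a load_date column from load_ts."""
    return df.withColumn("load_date", F.to_date("load_ts"))


def test_add_load_date_derives_calendar_date():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    source = spark.createDataFrame(
        [("A1", datetime.datetime(2024, 5, 1, 13, 45))],
        ["transaction_id", "load_ts"],
    )
    result = add_load_date(source)
    # The derived column exists and carries the calendar date of the load timestamp.
    assert "load_date" in result.columns
    assert result.first()["load_date"] == datetime.date(2024, 5, 1)
```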
MLOps Engineer (AWS/Azure)

Role Type: Contract, full-time, 8-hour day
Location: India (fully remote)
Hours: UK time zone for business hours
Start Date: ASAP
End Date: End of December 2025 (with potential extension)
Day Rate: To be discussed (GBP £ only)

Process:
1. ICS first interview (30 minutes)
2. CV shared with the client
3. Meet the client and complete a technical interview/assessment

Context of the client:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets, including the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Context of ICS:
At The Institute of Clever Stuff (ICS), we don't just solve problems... we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants alongside our virtual AI consultant, fortu.ai, which blends human ingenuity with AI-powered intelligence to deliver smarter, faster, and more effective results.

Essential Requirements:
- 9+ years of relevant professional experience, including 5+ years in platform engineering, designing, deploying, and managing scalable, secure cloud infrastructure across both Azure and AWS.
- Strong grounding in governance, audit, observability, and compliance for cloud-based GenAI/ML ecosystems.
- Proven experience setting up and managing CI/CD using Azure DevOps or AWS CodePipeline.
- Proficiency with infrastructure-as-code (ARM/Bicep, Terraform, CloudFormation, CDK) and containerisation (Docker, Kubernetes).
- Advanced understanding of networking (DNS, load balancing, VPNs, VNets/VPCs) and security (IAM, RBAC, policies, SCPs).
- Solid programming skills in Python plus scripting (Bash, PowerShell); familiarity with mainstream AI/ML libraries (TensorFlow, PyTorch, scikit-learn).
- Experience with cloud data stores and key management (Azure Blob, Cosmos DB, SQL, Key Vault; AWS S3, DynamoDB, RDS/KMS) and their integrations with AI services.

Core Technical Expertise (Must Have):
- Azure & AWS ML/AI services: Azure ML, Azure AI Services, Azure AI Search; AWS SageMaker, AWS Bedrock, AWS Lambda.
- GenAI & agentic ecosystems: exposure to Generative AI and Agentic AI ecosystems, such as Azure OpenAI, Azure AI Foundry/Hub, Bedrock, Anthropic Claude, OpenAI API, LlamaCloud, LangChain.
- Security & identity: Azure Policy, Azure RBAC, AWS IAM, AWS SCPs; audit logging; least-privilege design.
- IaC & platform automation: ARM/Bicep, Terraform, CloudFormation, CDK.
- DevOps/CI-CD: Azure DevOps or AWS CodePipeline; integration and delivery for data science and ML workflows.
- Data & storage: Azure Blob/Cosmos/SQL/Key Vault; AWS S3/DynamoDB/RDS; understanding of OLTP and OLAP patterns.
- Containers & orchestration: Docker and Kubernetes (including AKS/EKS patterns and ECR/ACR usage).
- Monitoring & observability: Grafana, Prometheus, Azure Monitor, Application Insights, Log Analytics Workspaces.
- Networking: DNS management, load balancing, VPNs, virtual networks (VNets/VPCs).
- Testing: unit and integration testing as part of CI/CD (ideally on Azure DevOps).
- ML tooling: Azure ML Studio, Python SDK (v2), CLI (v2) for monitoring, retraining, and redeployment.
- AI safety & evaluation: token usage comprehension; prompt injection/jailbreak risks and mitigations; Azure AI Evaluation SDK; AI red-teaming prompt security scans.

Working Methods:
- Agile, sprint-based delivery with Azure DevOps (boards, repos, pipelines).
- Strong DevOps and CI/CD pipeline management across environments.
- Close collaboration with Data Scientists, Data Analysts, Software Engineers, and platform teams.
- Clear documentation and communication suited to distributed teams.
- Stakeholder engagement to troubleshoot ML pipeline issues and support modelling infrastructure needs.

Beneficial Experience:
- Developer productivity: GitHub Copilot, Cursor, Claude Code.
- Microsoft/Azure services: Azure Bot Framework, API Management, Application Gateway, M365 Copilot.
- AWS SDKs & tooling: Boto3, AWS CDK.
- Notebooks & experimentation: Jupyter Notebook.
- ML frameworks: PyTorch, TensorFlow, scikit-learn; practical end-to-end ML workflow design.

Responsibilities:

Platform & Infrastructure
- Design, deploy, and manage scalable and secure cloud infrastructure across Azure and AWS using IaC (ARM/Bicep, Terraform, CloudFormation, CDK).
- Implement core networking (DNS, load balancing, VPNs, VNets/VPCs) and platform services for reliability and performance.
- Build and operate container platforms (Docker, Kubernetes; ACR/AKS and ECR/EKS patterns).
- Set up comprehensive monitoring and logging (Grafana, Prometheus, Azure Monitor, Application Insights, Log Analytics).

Security & Compliance
- Apply the principle of least privilege across cloud platforms (Azure RBAC, AWS IAM) and enforce policy (Azure Policy, AWS SCPs).
- Enable audit logging and controls appropriate for GenAI/ML workloads.
- Manage secrets and keys with Azure Key Vault and AWS KMS.

CI/CD & Testing
- Implement CI/CD for data science/ML pipelines with Azure DevOps or AWS CodePipeline.
- Embed robust unit and integration testing in the pipeline; champion code quality and operational readiness.

Infrastructure as Code (IaC)
- Define and evolve cloud resources as code; review and maintain standards, patterns, and reusable modules.
- Use Python or TypeScript where appropriate to codify infrastructure definitions (see the IaC sketch at the end of this posting).

Cloud Services (AWS & Azure)
- AWS: RDS, DynamoDB, Redshift, Aurora; EC2 (scaling), EBS/EFS; serverless (Lambda, SQS, SNS, EventBridge, Step Functions); containers (ECR); Bedrock; SageMaker; CloudFormation (CDK); KMS.
- Azure: Cosmos DB, Azure SQL (including serverless); compute (VMs, Scale Sets); serverless (Functions, Event Grid/Hub, Queue Storage, Service Bus); container services (ACR/AKS); Azure Resource Manager (ARM)/Bicep; Azure Key Vault; Azure Machine Learning; Azure Data Lake Storage.

MLOps & Model Lifecycle
- Enable production models across the ML lifecycle: deployment, monitoring for drift, retraining, technical evaluation, and business validation (see the deployment sketch at the end of this posting).
- Implement CI/CD orchestration for data science pipelines and support model governance.
- Collaborate with stakeholders to resolve ML pipeline issues and evolve the modelling platform.
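Two short, illustrative sketches follow for the responsibilities above; both are assumptions about how the work might look, not the client's actual setup. First, an infrastructure-as-code example using the AWS CDK in Python: a KMS-encrypted S3 bucket for model artifacts plus a read-only service role, showing the least-privilege pattern called out under Security & Compliance. The stack, bucket and role names are placeholders.

```python
# Minimal AWS CDK (v2, Python) sketch; stack and resource names are placeholders.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_iam as iam, aws_kms as kms, aws_s3 as s3
from constructs import Construct


class MlArtifactsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Customer-managed key with rotation for model artifacts.
        key = kms.Key(self, "ArtifactKey", enable_key_rotation=True)

        # Encrypted, non-public bucket for storing model artifacts.
        bucket = s3.Bucket(
            self, "ModelArtifacts",
            encryption=s3.BucketEncryption.KMS,
            encryption_key=key,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,
        )

        # Read-only role for the inference service, scoped to this bucket only (least privilege).
        role = iam.Role(
            self, "InferenceReadRole",
            assumed_by=iam.ServicePrincipal("sagemaker.amazonaws.com"),
        )
        bucket.grant_read(role)


app = App()
MlArtifactsStack(app, "MlArtifactsStack")
app.synth()
```

Second, a minimal model-deployment sketch with the Azure ML Python SDK (v2), covering the deployment step of the ML lifecycle. It assumes an MLflow-format model, which Azure ML can deploy to a managed online endpoint without a custom scoring script; the subscription, workspace, model and endpoint names are placeholders.

```python
# Minimal Azure ML SDK v2 deployment sketch; all names and IDs are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint, Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Register a local MLflow-format model (placeholder path and name).
model = ml_client.models.create_or_update(
    Model(path="./model", name="demand-forecast", type="mlflow_model")
)

# Create the managed online endpoint, then a deployment behind it.
endpoint = ManagedOnlineEndpoint(name="demand-forecast-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=model,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```

Drift monitoring and retraining would typically be layered on top of a deployment like this with Azure ML jobs, schedules, and monitoring, in line with the ML tooling requirements above.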