Work on data processing, ML inference pipelines, and API development. This is a practical engineering role focused on clean, scalable code, not ML research.

You will work on:
- Text/data parsing
- Classification pipelines
- Model inference services (FastAPI)
- Performance optimisation (batching, vectorisation)
- Small ML/NLP models in production environments

Key Skills (Must Have)
- Python (3+ years) in a production environment
- FastAPI or Flask
- pandas / numpy (data cleaning / preprocessing)
- scikit-learn (basic ML)
- Text processing / regex / NLP fundamentals
- Docker & deployment basics
- Model optimisation or inference experience (ONNX / batching / vectorisation)

Good to Have (Optional)
- Hugging Face transformers (basic usage)
- Experience with embeddings or multi-label classification
- Exposure to ONNX Runtime / quantisation
- AWS / GCP deployment basics
- Git, CI/CD, testing experience

Responsibilities
- Build and maintain Python microservices for ML inference
- Implement batching, caching, and performance improvements
- Parse real-world product data and clean text fields
- Integrate lightweight ML/NLP models into APIs
- Work closely with international backend and data teams

What We Offer
- Work on a global SaaS platform used by chefs worldwide
- Modern tech: FastAPI, model optimisation, lightweight NLP
- Opportunities to grow into ML Engineering or MLOps
- A collaborative international culture

Why This Role Is Exciting
- Have you ever wanted to build ML pipelines on real-world messy data, not just Kaggle datasets?
- Would you enjoy owning the full flow: parse → clean → infer → output?
- Interested in learning ONNX, quantisation, optimisation, and batch inference?
- Want to work on a global SaaS product used daily by chefs, dietitians, and foodservice operators?
- Keen to modernise a 20-year-old platform using Python, FastAPI, and practical AI?