Ollama now offers on‑device inference for image generation via the Flux 2‑Klein model (https://ollama.com/x/flux2-klein). Integrating this capability into presenton/presenton would enable a fully offline image‑generation workflow, eliminating external API calls to hosted image services and preserving user privacy.
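
As a rough illustration, the integration could talk to the local Ollama server over its HTTP API. The sketch below is an assumption, not a confirmed API: the default endpoint `http://localhost:11434` is standard for Ollama, but the exact request path and response shape for image generation (here assumed to be a base64‑encoded image field) would need to be verified against Ollama's current docs before implementation.

```python
import base64
import json
import urllib.request

# Default local Ollama endpoint; the image-generation path is an assumption.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_image_request(prompt: str, model: str = "x/flux2-klein") -> dict:
    """Build a request payload for local image generation (hypothetical shape)."""
    return {"model": model, "prompt": prompt, "stream": False}


def decode_image(response_json: dict) -> bytes:
    """Decode image bytes from the response; the 'image' base64 field is assumed."""
    return base64.b64decode(response_json["image"])


def generate_image(prompt: str) -> bytes:
    """Send the request to the local server and return raw image bytes."""
    payload = json.dumps(build_image_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return decode_image(json.load(resp))
```

Since everything runs against `localhost`, slide images could be generated entirely on‑device with no network egress.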