I am running llama-cpp-python version 0.3.16. When trying to load the recently released model [embeddinggemma-300M](https://huggingface.co/unsloth/embeddinggemma-300m-GGUF), I get the following error:

```
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma-embedding'
```

Support for this model architecture was added to llama.cpp in build [b6384](https://github.com/ggml-org/llama.cpp/releases/tag/b6384). Could you please update the llama.cpp version vendored by llama-cpp-python to reflect this addition?
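For reference, a minimal snippet that reproduces the failure; the model path is a placeholder for wherever the GGUF file from the Hugging Face repo above was downloaded:

```python
from llama_cpp import Llama

# Placeholder path: any GGUF quantization of embeddinggemma-300m triggers the error.
llm = Llama(
    model_path="./embeddinggemma-300m-Q8_0.gguf",
    embedding=True,  # it is an embedding model, so request embedding mode
)

# Never reached on 0.3.16 -- model loading aborts with:
#   llama_model_load: error loading model: error loading model architecture:
#   unknown model architecture: 'gemma-embedding'
emb = llm.create_embedding("hello world")
```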