convert : correct gemma 3n conversion #14450
Conversation
Btw, the conversion of pretrained Gemma 3n models fails like this:

```sh
python convert_hf_to_gguf.py google/gemma-3n-E4B/ --outfile models/gemma-3n-e4b/ggml-model-f16.gguf --outtype f16
```

```
Traceback (most recent call last):
  File "llama.cpp/convert_hf_to_gguf.py", line 6718, in <module>
    main()
  File "llama.cpp/convert_hf_to_gguf.py", line 6712, in main
    model_instance.write()
  File "llama.cpp/convert_hf_to_gguf.py", line 410, in write
    self.prepare_metadata(vocab_only=False)
  File "llama.cpp/convert_hf_to_gguf.py", line 523, in prepare_metadata
    self.set_vocab()
  File "llama.cpp/convert_hf_to_gguf.py", line 4411, in set_vocab
    with open(self.dir_model / "chat_template.jinja") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'google/gemma-3n-E4B/chat_template.jinja'
```
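The traceback shows `set_vocab` unconditionally opening `chat_template.jinja`, which pretrained (non-instruct) checkpoints don't ship. A minimal sketch of the defensive pattern, assuming a hypothetical helper (`read_chat_template` is illustrative, not the actual code in this PR):

```python
from pathlib import Path
from typing import Optional


def read_chat_template(dir_model: Path) -> Optional[str]:
    """Hypothetical helper: treat the chat template as optional.

    Pretrained (non-instruct) checkpoints may not include
    chat_template.jinja, so return None instead of raising
    FileNotFoundError like the unguarded open() in the traceback.
    """
    template_path = dir_model / "chat_template.jinja"
    if not template_path.is_file():
        return None  # caller can then skip the chat-template metadata
    return template_path.read_text(encoding="utf-8")
```

With this shape, the caller only writes chat-template metadata into the GGUF when the file actually exists.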
Oh hi - this looks fine!
Fixed in #14508
* origin/master:
  - Fix conditional enabling following arch checks for ggml-sycl (ggml-org#14504)
  - convert : correct gemma 3n conversion (ggml-org#14450)
  - kv-cache : use ggml_set_rows (ggml-org#14285)
  - ggml : fix FA mask dim 2 and 3 (ggml-org#14505)
  - ggml : remove kompute backend (ggml-org#14501)
  - CUDA: add dynamic shared mem to softmax, refactor general usage (ggml-org#14497)
* convert : correct gemma 3n conversion
* rm redundant code
Addresses the problem reported by @danielhanchen: https://huggingface.co/unsloth/gemma-3n-E4B-it-GGUF/discussions/6#6860763b91da860136961b2b