
glm-4-9b-chat-1m model issue: wrong shape #8467


Closed
0wwafa opened this issue Jul 13, 2024 · 2 comments
Comments


0wwafa commented Jul 13, 2024

I converted and quantized glm-4-9b-chat-1m in the usual way.
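For context, a minimal sketch of the usual llama.cpp flow, assuming the standard convert_hf_to_gguf.py script and the llama-quantize tool; the paths and filenames here are hypothetical:

```sh
# Convert the HF checkpoint to GGUF at f16 (hypothetical input path)
python convert_hf_to_gguf.py /path/to/glm-4-9b-chat-1m \
    --outfile glm-4-9b-chat-1m.f16.gguf --outtype f16

# Quantize the f16 GGUF down to Q5_K
./llama-quantize glm-4-9b-chat-1m.f16.gguf glm-4-9b-chat-1m.q5_k.gguf Q5_K
```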
When I run it with llama.cpp, I get:

llm_load_print_meta: max token length = 1024
llm_load_tensors: ggml ctx size =    0.14 MiB
llama_model_load: error loading model: check_tensor_dims: tensor 'blk.0.attn_qkv.weight' has wrong shape; expected  4096,  4608, got  4096,  5120,     1,     1
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/content/glm-4-9b-chat-1m.q5_k.gguf'
main: error: unable to load model

The same happens with all other quants.

I got no errors during conversion or quantization.
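The numbers in the error can be sanity-checked against the fused QKV width under grouped-query attention, which is hidden_size + 2 * n_kv_heads * head_dim. A quick sketch, assuming GLM-4-9B's hidden_size of 4096 and head_dim of 128:

```sh
# Fused QKV output rows = hidden_size + 2 * n_kv_heads * head_dim
echo $(( 4096 + 2 * 2 * 128 ))   # 4608 -> what the loader expected (2 KV heads)
echo $(( 4096 + 2 * 4 * 128 ))   # 5120 -> what the tensor actually holds (4 KV heads)
```

If that reading is right, the GGUF metadata records 2 KV heads while the 1M checkpoint uses 4, which would point at the converter's handling of multi_query_group_num for this variant rather than at quantization.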


NoobPythoner commented Jul 29, 2024

I have the same error. Is there anyone who can fix it?


This issue was closed because it has been inactive for 14 days since being marked as stale.
