Misc. bug: crashes when calling `llama_state_get_size` on a reranking model #13463
Name and Version
Compiled at commit 6562e5a, but it also reproduces on the latest master.
Operating systems
Mac
Which llama.cpp modules do you know to be affected?
libllama (core library)
Problem description & steps to reproduce
The process crashes with `SIGSEGV` when calling `llama_state_get_size` on a context created with a reranking model (`bge-reranker-v2-m3-Q8_0.gguf` in my tests). Here's a simple reproduction:
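A minimal sketch of a reproduction, assuming the current llama.cpp C API; the model path and the rerank pooling setup are assumptions, not taken from the original report:

```cpp
#include <cstdio>
#include "llama.h"

int main() {
    llama_backend_init();

    // Load the reranking model (path is an assumption)
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file("bge-reranker-v2-m3-Q8_0.gguf", mparams);
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Create a context configured for reranking
    llama_context_params cparams = llama_context_default_params();
    cparams.embeddings   = true;
    cparams.pooling_type = LLAMA_POOLING_TYPE_RANK;
    llama_context * ctx = llama_init_from_model(model, cparams);

    // Crashes here with SIGSEGV
    const size_t size = llama_state_get_size(ctx);
    printf("state size: %zu\n", size);

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```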
First Bad Commit
The issue was introduced at commit 6562e5a (PR #13108).
Relevant log output
Last logs before the crash:
```
set_abort_callback: call
llama_context:        CPU  output buffer size =     0.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 3
llama_context: max_nodes = 65536
context created
state_write_data: writing state
state_write_data: - writing model info
state_write_data: - writing output ids
state_write_data: - writing logits
state_write_data: - writing embeddings
state_write_data: - writing KV self
```
From `lldb`: