
Releases: robbiemu/llama.cpp

b5602 (06 Jun 18:06, commit 745aa53)
llama : deprecate llama_kv_self_ API (#14030)

* llama : deprecate llama_kv_self_ API

ggml-ci

* llama : allow llama_memory_(nullptr)

ggml-ci

* memory : add flag for optional data clear in llama_memory_clear

ggml-ci

b5437 (20 May 18:32, commit b7a1746)
mtmd-helper : bug fix to token batching in mtmd (#13650)

* Update mtmd-helper.cpp

* Update tools/mtmd/mtmd-helper.cpp

Co-authored-by: Xuan-Son Nguyen <[email protected]>


b5435 (20 May 15:29, commit a4090d1)
llama : remove llama_kv_cache_view API + remove deprecated (#13653)

ggml-ci

b5426 (19 May 21:36, commit 1dfbf2c)
common : add load_progress_callback (#13617)

b5400 (15 May 18:32, commit c6a2c9e)
gguf : use ggml log system (#13571)

* gguf : use ggml log system

* llama : remove unnecessary new lines in exception messages

b5393 (15 May 13:31, commit 3cc1f1f)
webui : handle PDF input (as text or image) + convert pasted long con…