Releases · kherud/java-llama.cpp
Version 4.1.0
Big credit to @vaiju1981 for new features:
- Now with support for Gemma 3
- Update to llama.cpp b4916
- Re-ranking and chat-template support!
Version 4.0.0
This major version updates from b3534 to the newest available llama.cpp version b4831.
- Huge credit to @vaiju1981 for enabling this update
- Credit to @glebashnik for exposing a function to convert json schemas to grammars
Version 3.4.1
This version is a minor fix for problems with the pre-built shared libraries on Linux x86_64.
Version 3.4.0
Credit goes to @shuttie for adding CUDA support on Linux x86_64 with this version.
Version 3.3.0
Upgrade to latest llama.cpp version b3534
Version 3.2.1
- Include GGML backend in text log
- Update to llama.cpp b3008
Version 3.2.0
Logging Re-Implementation (see #66)
- Re-adds logging callbacks via LlamaModel#setLogger(LogFormat, BiConsumer<LogLevel, String>)
- Removes the dysfunctional ModelParameters#setLogDirectory(String), ModelParameters#setDisableLog(boolean), and ModelParameters#setLogFormat(LogFormat)
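The re-added callback is a plain BiConsumer<LogLevel, String>. A minimal self-contained sketch of that shape (the LogLevel enum and setLogger stub below are illustrative stand-ins for the real de.kherud.llama classes, which require the native library):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

public class LoggerSketch {
    // Illustrative stand-in for de.kherud.llama.LogLevel
    enum LogLevel { DEBUG, INFO, WARN, ERROR }

    // Stand-in for the native side: holds the callback and forwards messages,
    // mimicking the shape of LlamaModel#setLogger(LogFormat, BiConsumer<LogLevel, String>)
    static BiConsumer<LogLevel, String> logger = (level, msg) -> {};

    static void setLogger(BiConsumer<LogLevel, String> callback) {
        logger = callback;
    }

    public static void main(String[] args) {
        List<String> captured = new ArrayList<>();
        // Wire a callback exactly as with the real API
        setLogger((level, msg) -> captured.add(level + ": " + msg));
        logger.accept(LogLevel.INFO, "backend initialized");
        System.out.println(captured.get(0));
    }
}
```

With the real library, the same lambda is passed to LlamaModel.setLogger along with a LogFormat argument.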
Version 3.1.1
Version 3.1.0
Changes:
- Updates to llama.cpp b2885
- Fixes #62 (generation can now be canceled)
- Fixes macOS x64 shared libraries
API changes:
- LlamaModel.Output is now LlamaOutput
- LlamaIterator is now public (previously the private LlamaModel.Iterator)
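Fix #62 means a running generation can be stopped mid-stream via the now-public iterator. A self-contained sketch of that pattern (the CancellableIterator below is an illustrative stand-in, not the real LlamaIterator API):

```java
import java.util.Iterator;
import java.util.List;

public class CancelSketch {
    // Illustrative stand-in for a cancelable token iterator:
    // hasNext() honours a cancel flag so iteration can stop early.
    static class CancellableIterator implements Iterator<String> {
        private final Iterator<String> tokens;
        private volatile boolean cancelled = false;

        CancellableIterator(Iterable<String> tokens) {
            this.tokens = tokens.iterator();
        }

        public void cancel() {
            cancelled = true;
        }

        @Override
        public boolean hasNext() {
            return !cancelled && tokens.hasNext();
        }

        @Override
        public String next() {
            return tokens.next();
        }
    }

    public static void main(String[] args) {
        CancellableIterator it = new CancellableIterator(List.of("Hello", ",", " world"));
        StringBuilder out = new StringBuilder();
        while (it.hasNext()) {
            out.append(it.next());
            it.cancel(); // stop after the first token
        }
        System.out.println(out); // prints "Hello"
    }
}
```

The volatile flag lets another thread request cancellation while the consuming loop is still running, which is the typical use case for stopping a long generation.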
Version 3.0.2
Upgrade to llama.cpp b2797
- Adds explicit support for Phi-3
- Adds flash attention
- Fixes #54