
Releases: kherud/java-llama.cpp

Version 4.1.0

18 Mar 20:45

Big credit to @vaiju1981 for new features:

  • Support for Gemma 3
  • Update to llama.cpp b4916
  • Re-ranking and chat-template support (a re-ranking sketch follows below)
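A minimal sketch of the new re-ranking feature, scoring candidate documents against a query. The rerank method name and signature, the setModel setter, and the model path are illustrative assumptions, not verified API:

```java
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class RerankExample {
    public static void main(String[] args) {
        // Hypothetical: load a model suitable for re-ranking (path is a placeholder).
        ModelParameters params = new ModelParameters().setModel("models/reranker.gguf");
        try (LlamaModel model = new LlamaModel(params)) {
            // Hypothetical rerank call: returns relevance scores for each document.
            System.out.println(model.rerank(
                    "What is llama.cpp?",
                    "llama.cpp runs LLM inference in C/C++.",
                    "Bananas are rich in potassium."));
        }
    }
}
```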

Version 4.0.0

09 Mar 15:15

This major version updates llama.cpp from b3534 to the newest available version, b4831.

  • Huge credit to @vaiju1981 for enabling this update
  • Credit to @glebashnik for exposing a function to convert JSON schemas to grammars (see the sketch below)
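The schema-to-grammar helper makes it possible to constrain generation to schema-valid JSON. A minimal sketch, assuming the function is exposed as LlamaModel.jsonSchemaToGrammar(String) and that InferenceParameters#setGrammar(String) accepts the result; the exact names and the static call are assumptions:

```java
import de.kherud.llama.InferenceParameters;
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class SchemaGrammarExample {
    public static void main(String[] args) {
        String schema = "{\"type\": \"object\", \"properties\": {\"name\": {\"type\": \"string\"}}}";
        // Hypothetical entry point: convert the JSON schema to a GBNF grammar string.
        String grammar = LlamaModel.jsonSchemaToGrammar(schema);
        ModelParameters params = new ModelParameters().setModel("models/model.gguf"); // placeholder path
        try (LlamaModel model = new LlamaModel(params)) {
            // Constrain sampling with the grammar so the output conforms to the schema.
            InferenceParameters infer = new InferenceParameters("Describe a person as JSON: ")
                    .setGrammar(grammar);
            System.out.println(model.complete(infer));
        }
    }
}
```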

Version 3.4.1

06 Sep 20:28

This version is a minor fix for problems with the pre-built shared libraries on Linux x86_64.

Version 3.4.0

06 Sep 18:25

Credit goes to @shuttie for adding CUDA support on Linux x86_64 in this version; a GPU-offloading sketch follows below.
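With the CUDA-enabled Linux x86_64 binaries, layers can be offloaded to the GPU at load time. A minimal sketch, assuming the 3.x-style setModelFilePath and setNGpuLayers setters (a placeholder path and layer count):

```java
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class CudaExample {
    public static void main(String[] args) {
        // Offload transformer layers to the GPU; setter names assumed from the 3.x API.
        ModelParameters params = new ModelParameters()
                .setModelFilePath("models/model.gguf") // placeholder path
                .setNGpuLayers(43);                    // number of layers to offload
        try (LlamaModel model = new LlamaModel(params)) {
            System.out.println("Model loaded with GPU offloading.");
        }
    }
}
```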

Version 3.3.0

07 Aug 19:07

Upgrades to llama.cpp b3534.

Version 3.2.1

27 May 18:18
  • Includes the GGML backend in the text log
  • Updates to llama.cpp b3008

Version 3.2.0

25 May 10:00

Logging Re-Implementation (see #66)

  • Re-adds logging callbacks via LlamaModel#setLogger(LogFormat, BiConsumer<LogLevel, String>) (see the sketch below)
  • Removes the non-functional ModelParameters#setLogDirectory(String), ModelParameters#setDisableLog(boolean), and ModelParameters#setLogFormat(LogFormat)
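A minimal sketch of the restored logging callback. The setLogger signature is taken from these notes; the import packages, the static call, and the LogFormat.TEXT constant are assumptions:

```java
import java.util.function.BiConsumer;

import de.kherud.llama.LlamaModel;
import de.kherud.llama.LogFormat;
import de.kherud.llama.LogLevel;

public class LoggingExample {
    public static void main(String[] args) {
        // Forward native llama.cpp log messages to a Java callback.
        BiConsumer<LogLevel, String> logger =
                (level, message) -> System.err.printf("[%s] %s%n", level, message);
        LlamaModel.setLogger(LogFormat.TEXT, logger);
    }
}
```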

Version 3.1.1

22 May 20:36
  • Adds chat template support (credit to @lesters, #64; see the sketch below)
  • Updates to llama.cpp b2969
  • Adds explicit Phi-3 128k support
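A minimal sketch of chat-template usage: rendering a system and user message through the model's built-in template before completion. The applyTemplate helper shown here is hypothetical and only illustrates the feature, not the exact API added in #64:

```java
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class ChatTemplateExample {
    public static void main(String[] args) {
        ModelParameters params = new ModelParameters().setModelFilePath("models/model.gguf"); // placeholder
        try (LlamaModel model = new LlamaModel(params)) {
            // Hypothetical helper: format messages with the model's chat template
            // (e.g. ChatML, Llama 3) so the prompt matches the model's training format.
            String prompt = model.applyTemplate(
                    "You are a helpful assistant.", // system message
                    "What does llama.cpp do?");     // user message
            System.out.println(prompt);
        }
    }
}
```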

Version 3.1.0

15 May 19:29

Changes:

  • Updates to llama.cpp b2885
  • Fixes #62 (generation can now be canceled; see the sketch below)
  • Fixes the macOS x64 shared libraries

API changes:

  • LlamaModel.Output is now LlamaOutput
  • LlamaIterator is now public; it was previously the private LlamaModel.Iterator
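With LlamaIterator now public, a running generation can be stopped from the consuming side. A minimal sketch, assuming model.generate(...) returns an iterable whose iterator() is a LlamaIterator exposing a cancel() method (the method name is an assumption):

```java
import de.kherud.llama.InferenceParameters;
import de.kherud.llama.LlamaIterator;
import de.kherud.llama.LlamaModel;
import de.kherud.llama.LlamaOutput;
import de.kherud.llama.ModelParameters;

public class CancelExample {
    public static void main(String[] args) {
        ModelParameters params = new ModelParameters().setModelFilePath("models/model.gguf"); // placeholder
        try (LlamaModel model = new LlamaModel(params)) {
            LlamaIterator iterator =
                    model.generate(new InferenceParameters("Once upon a time")).iterator();
            int outputs = 0;
            while (iterator.hasNext()) {
                LlamaOutput output = iterator.next();
                System.out.print(output);
                // Stop early after ten outputs; cancel() is assumed to end generation (#62).
                if (++outputs >= 10) {
                    iterator.cancel();
                }
            }
        }
    }
}
```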

Version 3.0.2

06 May 19:59

Upgrades to llama.cpp b2797

  • Adds explicit support for Phi-3
  • Adds flash attention (see the sketch below)
  • Fixes #54
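Flash attention is toggled when the model is loaded. A minimal sketch, assuming a setFlashAttention(boolean) setter on ModelParameters (the setter name is an assumption mirroring llama.cpp's -fa flag):

```java
import de.kherud.llama.InferenceParameters;
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class FlashAttentionExample {
    public static void main(String[] args) {
        ModelParameters params = new ModelParameters()
                .setModelFilePath("models/phi-3.gguf") // placeholder path
                .setFlashAttention(true);              // assumed setter; mirrors llama.cpp -fa
        try (LlamaModel model = new LlamaModel(params)) {
            System.out.println(model.complete(new InferenceParameters("Hello, ")));
        }
    }
}
```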