
llama.cpp b1112

Java Bindings for llama.cpp

The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook. This repository provides Java bindings for the C++ library.

Contributions are welcome!

Quick Start

Access this library via Maven:

<dependency>
    <groupId>de.kherud</groupId>
    <artifactId>llama</artifactId>
    <version>1.0.0</version>
</dependency>

You can then use the library as shown in this short example:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import de.kherud.llama.LlamaModel;
import de.kherud.llama.Parameters;

public class Example {

    public static void main(String... args) throws IOException {
        Parameters params = new Parameters.Builder()
                .setNGpuLayers(43)
                .setTemperature(0.7f)
                .setPenalizeNl(true)
                .setMirostat(Parameters.MiroStat.V2)
                .setAntiPrompt(new String[]{"\n"})
                .build();

        String modelPath = "/path/to/gguf-model-q4_0.bin";
        String system = "This is a conversation between User and Llama, a friendly chatbot.\n" +
                "Llama is helpful, kind, honest, good at writing, and never fails to answer any " +
                "requests immediately and with precision.\n";
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in, StandardCharsets.UTF_8));
        try (LlamaModel model = new LlamaModel(modelPath, params)) {
            // the system prompt is only part of the very first prompt
            String prompt = system;
            while (true) {
                prompt += "\nUser: ";
                System.out.print(prompt);
                String input = reader.readLine();
                prompt += input;
                System.out.print("Llama: ");
                prompt += "\nLlama: ";
                // generate() streams the answer token by token
                for (LlamaModel.Output output : model.generate(prompt)) {
                    System.out.print(output);
                }
                // the model keeps its context between calls, so only the new
                // turn needs to be passed on the next iteration
                prompt = "";
            }
        }
    }
}

Also have a look at the examples.

Configuration

You can configure every option the library offers. Note, however, that most options aren't relevant to this Java binding yet (in particular, everything that concerns the command-line interface).

Parameters params = new Parameters.Builder()
                            .setInputPrefix("...")
                            .setLoraAdapter("/path/to/lora/adapter")
                            .setLoraBase("/path/to/lora/base")
                            .build();
LlamaModel model = new LlamaModel("/path/to/model.bin", params);
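
A configured model is then used exactly as in the quick start. As a minimal sketch (the model path and parameter values are placeholders), a one-shot generation looks like this:

// minimal sketch: one-shot generation with a configured model;
// the path and parameter values below are placeholders
Parameters params = new Parameters.Builder()
        .setTemperature(0.7f)
        .setNGpuLayers(43)
        .build();
try (LlamaModel model = new LlamaModel("/path/to/model.bin", params)) {
    for (LlamaModel.Output output : model.generate("Tell me a joke.")) {
        System.out.print(output);  // tokens are streamed as they are generated
    }
}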

Installing the llama.cpp library

Make sure the llama.cpp shared library is appropriately installed for your platform:

  • libllama.so (Linux)
  • libllama.dylib (macOS)
  • llama.dll (Windows)

Refer to the official llama.cpp readme for details. The library can be built as part of the llama.cpp project:

mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=ON  # add any other arguments for your backend
cmake --build . --config Release

Look for the shared library in the build directory.

Important

If you are running macOS with Metal, you have to put the file ggml-metal.metal from build/bin in the same directory as the shared library.

Depending on your setup, either:

  • Move the file to the correct directory, e.g., /usr/local/lib on most Linux distributions. If you're not sure where to put it, just run the code: Java will throw an error explaining where it looks.
  • Set the JVM option -Djna.library.path="/path/to/library/" (IDEs like IntelliJ make this easy), or set the property programmatically, as sketched after this list.
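
As a sketch of the second option, jna.library.path is a standard JNA system property and can be set in code, as long as this happens before the first LlamaModel is created (the directory below is a placeholder):

// must run before JNA loads the native library for the first time;
// "/path/to/library" is a placeholder for wherever libllama was built
System.setProperty("jna.library.path", "/path/to/library");

Parameters params = new Parameters.Builder().build();
try (LlamaModel model = new LlamaModel("/path/to/model.bin", params)) {
    // use the model as usual
}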
