1 parent aab74f0 commit 3660230
docs/server.md
@@ -44,10 +44,10 @@ You'll first need to download one of the available multi-modal models in GGUF fo
- [llava1.5 7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
- [llava1.5 13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)
-Then when you run the server you'll need to also specify the path to the clip model used for image embedding
+Then when you run the server you'll also need to specify the path to the CLIP model used for image embedding and the `llava-1-5` chat format.
```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path>
+python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
```
Then you can just use the OpenAI API as normal
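
As a quick illustration of that last step, here is a minimal sketch of hitting the multi-modal endpoint with the `openai` Python client (v1+). It assumes the server started by the command above is listening on its default `http://localhost:8000/v1`; the model name and image URL are placeholders to adjust for your setup.

```python
# Minimal sketch: query the local llama-cpp-python server via the OpenAI client.
# Assumes the default server address http://localhost:8000/v1; the api_key
# value is a placeholder since the local server does not check it by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder; the local server serves the loaded model
    messages=[
        {
            "role": "user",
            "content": [
                # Image content part in the OpenAI chat format, handled by
                # the llava-1-5 chat handler on the server side.
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
                {"type": "text", "text": "What is shown in this image?"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```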