
Commit 3660230

Fix docs multi-modal docs
1 parent aab74f0 commit 3660230

File tree

1 file changed: +2 −2 lines changed


docs/server.md (+2 −2)
@@ -44,10 +44,10 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 - [llava1.5 7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
 - [llava1.5 13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)

-Then when you run the server you'll need to also specify the path to the clip model used for image embedding
+Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format

 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path>
+python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
 ```

 Then you can just use the OpenAI API as normal
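As a rough illustration of that last context line ("use the OpenAI API as normal"), a client request against the multi-modal server might look like the sketch below. This is a minimal sketch, assuming the OpenAI Python SDK (v1 or later), the server listening at its default http://localhost:8000/v1 address, and a placeholder image URL; these details are assumptions, not part of the commit.

```python
# Minimal sketch: querying the llava-1-5 server through its OpenAI-compatible API.
# Assumptions: OpenAI Python SDK >= 1.0, server started as shown above on the
# default host/port, and a hypothetical publicly reachable image URL.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default llama_cpp.server address
    api_key="sk-no-key-needed",           # placeholder; a local server typically does not validate this
)

response = client.chat.completions.create(
    model="llava-1.5",  # placeholder name; the server answers with the model passed via --model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what you see in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},  # hypothetical image
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```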
