
Commit 52d9d70

audip and abetlen authored
docs: Update README.md to fix pip install llama cpp server (abetlen#1187)
Without the single quotes, running the command prints an error saying no matching packages were found on PyPI. Adding the quotes fixes it:

```bash
$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
```

Co-authored-by: Andrei <[email protected]>
1 parent 251a8a2 commit 52d9d70

File tree

1 file changed (+2, −2 lines changed)

README.md (+2, −2)
````diff
@@ -505,14 +505,14 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 To install the server package and get started:
 
 ```bash
-pip install llama-cpp-python[server]
+pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf
 ```
 
 Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python[server]
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35
 ```
````
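The quotes matter because square brackets are glob characters: zsh aborts with "no matches found" when an unquoted pattern matches no file, so `pip` never even runs, while bash happens to pass the unmatched pattern through literally. A minimal sketch of quoting styles that keep the extras spec literal in any shell (using `echo` as a stand-in for `pip`, purely for illustration):

```shell
# Unquoted, [server] is a glob: it matches a single file named s, e, r, or v.
# zsh errors out on no match; bash silently passes the text through unchanged.

# Any of these forms disables globbing, so every shell sees the literal string:
echo 'llama-cpp-python[server]'      # single quotes
echo "llama-cpp-python[server]"      # double quotes
echo llama-cpp-python\[server\]      # backslash-escaped brackets
```

Single quotes are the safest habit, since they also suppress `$`-expansion and work identically in bash, zsh, and POSIX sh.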
