Commit d6fb16e

Browse files
committed
docs: Update README
1 parent 5b258bf commit d6fb16e

1 file changed: README.md (+4 −1 lines)
````diff
@@ -163,7 +163,7 @@ Below is a short example demonstrating how to use the high-level API for basic
 )
 >>> output = llm(
       "Q: Name the planets in the solar system? A: ", # Prompt
-      max_tokens=32, # Generate up to 32 tokens
+      max_tokens=32, # Generate up to 32 tokens, set to None to generate up to the end of the context window
       stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
       echo=True # Echo the prompt back in the output
 ) # Generate a completion, can also call create_completion
````
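The parameters touched by the hunk above (`max_tokens`, `stop`, `echo`) shape the completion dict returned by the high-level API. A minimal sketch of that shape, using a hypothetical stand-in function rather than a real model (`fake_completion` and all field values are illustrative, not real llama-cpp-python output):

```python
# Hypothetical stand-in for llama_cpp.Llama.__call__ -- no model is loaded here;
# it only mimics the shape of the completion dict the README example prints.
def fake_completion(prompt, max_tokens=32, stop=None, echo=False):
    answer = "Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune."
    if max_tokens is not None:            # None would mean: fill the context window
        answer = " ".join(answer.split()[:max_tokens])
    return {
        "choices": [
            {
                # echo=True prepends the prompt to the generated text
                "text": (prompt if echo else "") + answer,
                "finish_reason": "stop",  # generation halted on a `stop` string
            }
        ],
    }

output = fake_completion(
    "Q: Name the planets in the solar system? A: ",
    max_tokens=32, stop=["Q:", "\n"], echo=True,
)
print(output["choices"][0]["text"])
```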
````diff
@@ -425,6 +425,9 @@ pip install -e .[all]
 make clean
 ```
 
+You can also test out specific commits of `llama.cpp` by checking out the desired commit in the `vendor/llama.cpp` submodule and then running `make clean` and `pip install -e .` again. Any changes in the `llama.h` API will require
+changes to the `llama_cpp/llama_cpp.py` file to match the new API (additional changes may be required elsewhere).
+
 ## FAQ
 
 ### Are there pre-built binaries / binary wheels available?
````
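The added note about keeping `llama_cpp/llama_cpp.py` in sync with `llama.h` reflects how ctypes-style bindings work in general: each C function's prototype is declared on the Python side, so a changed C signature must be mirrored there. A sketch of that pattern, using libm's `sqrt` as a stand-in for a `llama.h` function (this is not llama-cpp-python's actual code, just the general ctypes idiom):

```python
import ctypes
import ctypes.util

# Load a shared library, the way a binding module loads the compiled C library.
# libm's sqrt stands in here for a real llama.h function.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# The argtypes/restype declaration must match the C prototype exactly;
# if the C signature changes upstream, this declaration must change with it.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```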
