Commit eb56ce2

docs: fix low-level api example

1 parent 0f8cad6

1 file changed: +2 −2 lines

README.md (+2 −2)
````diff
@@ -561,15 +561,15 @@ Below is a short example demonstrating how to use the low-level API to tokenize
 ```python
 >>> import llama_cpp
 >>> import ctypes
->>> llama_cpp.llama_backend_init(numa=False) # Must be called once at the start of each program
+>>> llama_cpp.llama_backend_init(False) # Must be called once at the start of each program
 >>> params = llama_cpp.llama_context_default_params()
 # use bytes for char * params
 >>> model = llama_cpp.llama_load_model_from_file(b"./models/7b/llama-model.gguf", params)
 >>> ctx = llama_cpp.llama_new_context_with_model(model, params)
 >>> max_tokens = params.n_ctx
 # use ctypes arrays for array params
 >>> tokens = (llama_cpp.llama_token * int(max_tokens))()
->>> n_tokens = llama_cpp.llama_tokenize(ctx, b"Q: Name the planets in the solar system? A: ", tokens, max_tokens, add_bos=llama_cpp.c_bool(True))
+>>> n_tokens = llama_cpp.llama_tokenize(ctx, b"Q: Name the planets in the solar system? A: ", tokens, max_tokens, llama_cpp.c_bool(True))
 >>> llama_cpp.llama_free(ctx)
 ```
````
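The fix switches both calls from keyword to positional arguments. A minimal sketch of why that matters when a binding hands you raw ctypes foreign functions: unless named parameters are declared via `paramflags`, a ctypes function pointer only accepts positional arguments, and a keyword call raises an error. The `libc`/`abs` names below are illustrative only (not part of llama-cpp-python), and loading the C runtime with `CDLL(None)` assumes a POSIX system:

```python
import ctypes

# Load the C runtime (POSIX; on Windows you would load msvcrt instead).
libc = ctypes.CDLL(None)

# Declare the C signature: int abs(int).
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

# A positional call works as expected.
print(libc.abs(-5))  # 5

# A keyword call fails: no paramflags were declared, so the raw
# foreign function has no notion of a parameter named "n".
try:
    libc.abs(n=-5)
    print("keyword call accepted")
except Exception as exc:
    print("keyword call rejected:", type(exc).__name__)
```

This is the same shape of failure the commit avoids by passing `False` and `llama_cpp.c_bool(True)` positionally instead of as `numa=` and `add_bos=`.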
