Commit 9c68b18

docs: Add api reference links in README

1 parent 174ef3d · commit 9c68b18

File tree

1 file changed: +15 -1 lines changed


README.md

Lines changed: 15 additions & 1 deletion
@@ -108,7 +108,9 @@ Detailed MacOS Metal GPU install documentation is available at [docs/install/mac
 
 ## High-level API
 
-The high-level API provides a simple managed interface through the `Llama` class.
+[API Reference](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#high-level-api)
+
+The high-level API provides a simple managed interface through the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
 
 Below is a short example demonstrating how to use the high-level API for basic text completion:
 
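For context, the text-completion example this hunk leads into (elided from the diff) looks roughly like the following. This is a minimal sketch rather than the commit's verbatim example; the model path and prompt are placeholders.

```python
from llama_cpp import Llama

# Load a GGUF model from disk (path is a placeholder).
llm = Llama(model_path="./models/7B/llama-model.gguf")

# Calling the Llama instance directly (__call__) runs a text completion.
output = llm(
    "Q: Name the planets in the solar system? A: ",
    max_tokens=32,        # cap the number of generated tokens
    stop=["Q:", "\n"],    # stop generation when these strings appear
    echo=True,            # include the prompt in the returned text
)

# The result follows an OpenAI-style completion schema.
print(output["choices"][0]["text"])
```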
@@ -143,6 +145,8 @@ Below is a short example demonstrating how to use the high-level API for basic text completion:
 }
 ```
 
+Text completion is available through the [`__call__`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.__call__) and [`create_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_completion) methods of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
+
 ### Chat Completion
 
 The high-level API also provides a simple interface for chat completion.
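As a sketch of the `create_chat_completion` method the added line links to: the model path and `chat_format` below are placeholders, and `chat_format` must match the model being used, as the next hunk notes.

```python
from llama_cpp import Llama

# chat_format is a placeholder; it must match the model being used.
llm = Llama(model_path="./models/7B/llama-model.gguf", chat_format="llama-2")

# Messages use the familiar OpenAI-style role/content structure.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ]
)
print(response["choices"][0]["message"]["content"])
```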
@@ -163,6 +167,8 @@ Note that the `chat_format` option must be set for the particular model you are using.
 )
 ```
 
+Chat completion is available through the [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion) method of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
+
 ### Function Calling
 
 The high-level API also provides a simple interface for function calling.
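A hedged sketch of what function calling through `create_chat_completion` looks like: it assumes a model and chat format with function-calling support (e.g. `functionary`), and the `get_weather` tool schema below is purely illustrative, following the OpenAI tools convention.

```python
from llama_cpp import Llama

# Assumes a function-calling-capable model/chat format; both are placeholders.
llm = Llama(
    model_path="./models/functionary/model.gguf",
    chat_format="functionary",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration only
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# The model's tool call (if any) appears in the returned message.
print(response["choices"][0]["message"])
```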
@@ -296,6 +302,12 @@ python3 -m llama_cpp.server --model models/7B/llama-model.gguf --chat_format cha
 That will format the prompt according to how the model expects it. You can find the prompt format in the model card.
 For possible options, see [llama_cpp/llama_chat_format.py](llama_cpp/llama_chat_format.py) and look for lines starting with "@register_chat_format".
 
+### Web Server Examples
+
+- [Local Copilot replacement](https://llama-cpp-python.readthedocs.io/en/latest/server/#code-completion)
+- [Function Calling support](https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling)
+- [Vision API support](https://llama-cpp-python.readthedocs.io/en/latest/server/#multimodal-models)
+
 ## Docker image
 
 A Docker image is available on [GHCR](https://ghcr.io/abetlen/llama-cpp-python). To run the server:
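Since `llama_cpp.server` exposes an OpenAI-compatible API, the server from the hunk above can be exercised with the standard `openai` client. A minimal sketch, assuming the server listens on localhost:8000 and the `openai` v1 package is installed; the api_key is a dummy value and the model name is illustrative.

```python
from openai import OpenAI

# Point the OpenAI client at the local server (assumed on localhost:8000);
# the api_key is a dummy value, since the local server needs none by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-needed")

response = client.chat.completions.create(
    model="llama-model",  # illustrative; use the model name your server reports
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```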
@@ -307,6 +319,8 @@ docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/lla
 
 ## Low-level API
 
+[API Reference](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#low-level-api)
+
 The low-level API is a direct [`ctypes`](https://docs.python.org/3/library/ctypes.html) binding to the C API provided by `llama.cpp`.
 The entire low-level API can be found in [llama_cpp/llama_cpp.py](https://github.com/abetlen/llama-cpp-python/blob/master/llama_cpp/llama_cpp.py) and directly mirrors the C API in [llama.h](https://github.com/ggerganov/llama.cpp/blob/master/llama.h).
 
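For reference, driving the low-level bindings looks roughly like this. A minimal sketch, assuming function names that mirror `llama.h` around the time of this commit; the exact signatures change between versions, so treat it as illustrative rather than exact.

```python
import llama_cpp

# One-time backend initialization (the signature has varied across versions).
llama_cpp.llama_backend_init(False)

# char * parameters are passed as bytes; the model path is a placeholder.
model_params = llama_cpp.llama_model_default_params()
model = llama_cpp.llama_load_model_from_file(
    b"./models/7B/llama-model.gguf", model_params
)

ctx_params = llama_cpp.llama_context_default_params()
ctx = llama_cpp.llama_new_context_with_model(model, ctx_params)

# ... tokenize, decode, and sample via the mirrored C functions ...

llama_cpp.llama_free(ctx)
llama_cpp.llama_free_model(model)
```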
