Commit 6b3aa7f

Bump version

1 parent 3fbcded commit 6b3aa7f

File tree

2 files changed: +18 -1 lines changed

CHANGELOG.md (+17)
@@ -7,6 +7,23 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.12]
+
+- Update llama.cpp to ggerganov/llama.cpp@50337961a678fce4081554b24e56e86b67660163
+- Fix missing `n_seq_id` in `llama_batch` by @NickAlgra in #842
+- Fix exception raised in `__del__` when freeing models by @cebtenzzre in #848
+- Performance improvement for logit bias by @zolastro in #851
+- Fix suffix check arbitrary code execution bug by @mtasic85 in #854
+- Fix typo in `function_call` parameter in `llama_types.py` by @akatora28 in #849
+- Fix streaming not returning `finish_reason` by @gmcgoldr in #798
+- Fix `n_gpu_layers` check to allow values less than 1 for server by @hxy9243 in #826
+- Suppress stdout and stderr when freeing model by @paschembri in #803
+- Fix `llama2` chat format by @delock in #808
+- Add validation for tensor_split size by @eric1932 in #820
+- Print stack trace on server error by @abetlen in d6a130a052db3a50975a719088a9226abfebb266
+- Update docs for gguf by @johnccshen in #783
+- Add `chatml` chat format by @abetlen in 305482bd4156c70802fc054044119054806f4126
+
 ## [0.2.11]
 
 - Fix bug in `llama_model_params` object has no attribute `logits_all` by @abetlen in d696251fbe40015e8616ea7a7d7ad5257fd1b896
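Two of the 0.2.12 entries are easy to exercise together from the package's high-level API: the new `chatml` chat format and the streaming `finish_reason` fix from #798. The sketch below is illustrative only, not part of the commit; the model path is a placeholder and assumes a local GGUF model.

```python
from llama_cpp import Llama

# Placeholder path; any chatml-style GGUF model works here.
llm = Llama(model_path="./models/example.gguf", chat_format="chatml")

stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    choice = chunk["choices"][0]
    delta = choice.get("delta", {})
    if "content" in delta:
        print(delta["content"], end="", flush=True)
    # Before the #798 fix, a streamed response could terminate without
    # this field being set on the final chunk.
    if choice.get("finish_reason") is not None:
        print("\nfinish_reason:", choice["finish_reason"])
```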

llama_cpp/__init__.py (+1 -1)

@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.11"
+__version__ = "0.2.12"
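Since the bump is just the `__version__` string, a minimal way to confirm an installed build picked it up:

```python
# __version__ comes straight from llama_cpp/__init__.py as changed above.
import llama_cpp

print(llama_cpp.__version__)  # expected: "0.2.12"
```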
