README.md: 2 additions, 2 deletions

```diff
@@ -2,7 +2,7 @@
 This is a C++ example running 💫 StarCoder inference using the [ggml](https://github.com/ggerganov/ggml) library.
 
-The program runs on the CPU - no video card is required.
+The program can run on the CPU - no video card is required.
 
 The example supports the following 💫 StarCoder models:

@@ -114,5 +114,5 @@ You can also try to quantize the `ggml` models via 4-bit integer quantization.
 
 The repo includes a proof-of-concept iOS app in the `StarCoderApp` directory. You need to provide the converted (and possibly quantized) model weights, placing a file called `bigcode_ggml_model.bin.bin` inside that folder. This is what it looks like on an iPhone:
```
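The second hunk's context line mentions 4-bit integer quantization of the `ggml` models. As a rough sketch of what that workflow typically looks like in the ggml example projects: the binary names (`starcoder-quantize`, `starcoder`), the model paths, and the trailing quantization-type argument below are assumptions based on the usual ggml example layout, not details confirmed by this diff.

```shell
# Sketch only: binary names and model paths are assumptions; adjust to your build.
# Quantize an f16 ggml model down to 4-bit; in the ggml examples the trailing
# integer selects the quantization type (assumed here to map to q4_1).
./bin/starcoder-quantize \
    models/bigcode/gpt_bigcode-santacoder-ggml.bin \
    models/bigcode/gpt_bigcode-santacoder-ggml-q4_1.bin \
    3

# Run CPU-only inference with the quantized model (no video card required,
# as the README change above states).
./bin/starcoder \
    -m models/bigcode/gpt_bigcode-santacoder-ggml-q4_1.bin \
    -p "def fibonacci(n):" \
    -t 4 -n 64
```

The quantized file is also the kind of weight file the iOS proof-of-concept expects, renamed to the filename the app looks for.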