@@ -52,45 +52,41 @@ Otherwise, while installing it will build the llama.cpp x86 version which will b
To install with OpenBLAS, set the `LLAMA_BLAS` and `LLAMA_BLAS_VENDOR` CMake flags (via `CMAKE_ARGS`) before installing:

```bash
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
```
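
If you want to confirm that the resulting wheel was actually built against OpenBLAS rather than the plain CPU backend, one quick check is to print llama.cpp's compile-time system info. This is a minimal sketch, not part of the original instructions, and it assumes your installed llama-cpp-python version exposes the low-level `llama_print_system_info` binding; on a BLAS-enabled build the output should include `BLAS = 1`.

```python
# Minimal post-install check (sketch): report the compile-time capabilities
# of the bundled llama.cpp. Assumes llama_cpp.llama_print_system_info is
# exposed by your installed version and returns a bytes string.
import llama_cpp

print(llama_cpp.llama_print_system_info().decode("utf-8"))
```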

To install with cuBLAS, set the `LLAMA_CUBLAS=on` CMake flag (via `CMAKE_ARGS`) before installing:

```bash
- CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
```
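
A cuBLAS build only accelerates the layers you actually offload to the GPU, which is controlled by the `n_gpu_layers` parameter of `Llama`; the same parameter applies to the Metal, CLBlast, and hipBLAS builds below. The following is an illustrative sketch, with a placeholder model path that is not part of the original docs:

```python
# Illustrative sketch: offload part of the model to the GPU on a
# cuBLAS-enabled build. The model path below is a placeholder; use a
# model file in a format supported by your llama-cpp-python version.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.gguf",  # placeholder: point at a local model file
    n_gpu_layers=35,                        # number of transformer layers to offload
)
out = llm("Q: What is 2 + 2? A:", max_tokens=8)
print(out["choices"][0]["text"])
```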

To install with CLBlast, set the `LLAMA_CLBLAST=on` CMake flag (via `CMAKE_ARGS`) before installing:

```bash
- CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
```

To install with Metal (MPS), set the `LLAMA_METAL=on` CMake flag (via `CMAKE_ARGS`) before installing:

```bash
- CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
```

To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` CMake flag (via `CMAKE_ARGS`) before installing:

```bash
- CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
```

#### Windows remarks

- To set the variables `CMAKE_ARGS` and `FORCE_CMAKE` in PowerShell, follow the next steps (Example using, OpenBLAS):
+ To set the `CMAKE_ARGS` variable in PowerShell, follow these steps (example using OpenBLAS):

```ps
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
```

- ```ps
- $env:FORCE_CMAKE = 1
- ```
-
Then, call `pip` after setting the variable:

```
pip install llama-cpp-python
```
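
As a final sanity check after any of the installs above, you can verify that the package imports and report its version. This is a small sketch, not from the original docs, assuming your release exposes `__version__`:

```python
# Post-install sanity check (sketch): the import fails if the native
# extension did not build, and __version__ reports the installed release.
import llama_cpp

print(llama_cpp.__version__)
```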