Commit 4f87b23

docs: add Vulkan build command

1 parent e71ddce commit 4f87b23

File tree

1 file changed: +10 -1 lines changed


README.md

Lines changed: 10 additions & 1 deletion
@@ -21,7 +21,7 @@ Inference of Stable Diffusion and Flux in pure C/C++
 - Accelerated memory-efficient CPU inference
     - Only requires ~2.3GB when using txt2img with fp16 precision to generate a 512x512 image; enabling Flash Attention brings that down to ~1.8GB.
 - AVX, AVX2 and AVX512 support for x86 architectures
-- Full CUDA, Metal and SYCL backend for GPU acceleration.
+- Full CUDA, Metal, Vulkan and SYCL backend for GPU acceleration.
 - Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAE models
     - No need to convert to `.ggml` or `.gguf` anymore!
 - Flash Attention for memory usage optimization (only CPU for now)
@@ -142,6 +142,15 @@ cmake .. -DSD_METAL=ON
 cmake --build . --config Release
 ```

+##### Using Vulkan
+
+Install the Vulkan SDK from https://www.lunarg.com/vulkan-sdk/.
+
+```
+cmake .. -DSD_VULKAN=ON
+cmake --build . --config Release
+```
+
 ##### Using SYCL

 Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and the [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before you start. For more details and steps, refer to the [llama.cpp SYCL backend](https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
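For readers following along, here is a minimal end-to-end sketch of the new Vulkan path, assuming the out-of-tree `build/` directory layout used elsewhere in the README; the `sd` binary location and the model path in the final line are illustrative, not taken from this diff:

```
# Prerequisite: install the Vulkan SDK from https://www.lunarg.com/vulkan-sdk/
mkdir build && cd build
cmake .. -DSD_VULKAN=ON
cmake --build . --config Release
# Illustrative invocation; adjust the binary location and model path to your setup.
./bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat"
```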

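The SYCL section shown above as diff context predates this commit. For comparison, a hedged sketch of an Intel GPU build, assuming oneAPI is installed at its default location and that the backend follows the same `SD_*` CMake flag convention as Metal and Vulkan; `-DSD_SYCL=ON` and the icx/icpx compiler selection are assumptions, not taken from this page:

```
# Load the oneAPI environment (default install path; adjust if yours differs).
source /opt/intel/oneapi/setvars.sh
mkdir build && cd build
# -DSD_SYCL=ON is assumed by analogy with -DSD_METAL=ON / -DSD_VULKAN=ON.
cmake .. -DSD_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build . --config Release
```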