Local inference engine
Acknowledgements: ggml-org/llama.cpp
- set `BUILD_SHARED_LIBS` to `FALSE`
- set `GGML_CPU` to `FALSE`
- set `CMAKE_OSX_ARCHITECTURES` to `x86_64`
- set `LLAMA_CURL` to `FALSE`, cf. ggml-org/llama.cpp#9937 (a combined configure line is sketched after the build commands below)
cmake -S . -B build -A x64 ^
-DBUILD_SHARED_LIBS=FALSE ^
-DCMAKE_TOOLCHAIN_FILE={...\vcpkg\scripts\buildsystems\vcpkg.cmake} ^
-DLLAMA_BUILD_SERVER=ON
cmake --build build --config Release
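For reference, all of the cache variables listed above can be passed on the configure line as `-D` options. A sketch, not the canonical invocation: the vcpkg toolchain path is a placeholder, and `CMAKE_OSX_ARCHITECTURES` only takes effect when configuring for macOS, not in this Windows build:

cmake -S . -B build -A x64 ^
    -DBUILD_SHARED_LIBS=FALSE ^
    -DGGML_CPU=FALSE ^
    -DLLAMA_CURL=FALSE ^
    -DCMAKE_TOOLCHAIN_FILE={...\vcpkg\scripts\buildsystems\vcpkg.cmake} ^
    -DLLAMA_BUILD_SERVER=ON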
- open the project with Visual Studio
- add the curl include paths (an example value is sketched after this list)
- add libraries (the full linker field is sketched after this list):
  - Crypt32.lib
  - Secur32.lib
  - Iphlpapi.lib
  - libcurl.lib
  - zlib.lib
  - ws2_32.lib
- build each target with the `/MT` runtime library (see the note after this list)
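The curl include paths go under C/C++ > General > Additional Include Directories in each project's properties. A sketch of a typical value, assuming curl was installed through vcpkg with a static triplet (the vcpkg root and the triplet name are placeholders; adjust to your install):

{...\vcpkg\installed\x64-windows-static\include}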
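The libraries are added under Linker > Input > Additional Dependencies. The field would look something like this (`%(AdditionalDependencies)` preserves whatever the project already inherits):

Crypt32.lib;Secur32.lib;Iphlpapi.lib;libcurl.lib;zlib.lib;ws2_32.lib;%(AdditionalDependencies)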
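In Visual Studio, `/MT` is C/C++ > Code Generation > Runtime Library set to Multi-threaded. As an alternative sketch, rather than setting it per target in the IDE, CMake can pin it at configure time through `CMAKE_MSVC_RUNTIME_LIBRARY` (available since CMake 3.15; it only takes effect when policy CMP0091 is NEW):

cmake -S . -B build -A x64 ^
    -DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreaded ^
    -DBUILD_SHARED_LIBS=FALSE

`MultiThreaded` maps to `/MT`. A static runtime should be paired with a static vcpkg triplet for curl and zlib, otherwise link errors are likely.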