I built a pre-compiled manylinux wheel for llama_cpp_python that bundles all the required native shared libraries (e.g., libllama.so, libggml-cpu.so), so users can install it without building the project from source.
This is intended as a community contribution to help developers who struggle with the native build process.
🔥 Download the wheel
You can download the wheel from my GitHub release here:
https://github.com/mrzeeshanahmed/llama-cpp-python/releases/tag/v0.3.17-manylinux-x86_64
Supported Environment
✅ Linux x86_64
✅ Python 3.10
✅ CPU only (OpenBLAS + OpenMP backend)
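Since the wheel targets a specific platform and interpreter, a quick sanity check before installing can save a confusing pip error. A minimal sketch, using only the standard library (the `wheel_compatible` helper is my own illustration, not part of llama_cpp_python):

```python
import sys
import sysconfig

def wheel_compatible(platform: str, py_version: tuple) -> bool:
    """Return True if the environment matches the wheel's stated target:
    Linux x86_64 with CPython 3.10 (per the release notes above)."""
    return platform.startswith("linux-x86_64") and tuple(py_version[:2]) == (3, 10)

# Check the running interpreter; sysconfig.get_platform() returns
# e.g. "linux-x86_64" on a 64-bit Linux build.
if wheel_compatible(sysconfig.get_platform(), sys.version_info):
    print("This environment matches the wheel's target.")
else:
    print("Mismatch: use the official package or build from source instead.")
```

If the check fails, `pip install` of the wheel would be rejected anyway with a "not a supported wheel on this platform" error; this just makes the reason explicit.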
This build took eight hours of my life and taught me a lot I didn't know. If it saves you some time, please show some form of appreciation.