
Trying to install on Ubuntu, CPU only #1954

@ghost

Description

Building wheels for collected packages: llama-cpp-python
678.1 Building wheel for llama-cpp-python (pyproject.toml): started
678.6 Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
678.6 error: subprocess-exited-with-error
678.6
678.6 × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
678.6 │ exit code: 1
678.6 ╰─> [16 lines of output]
678.6 *** scikit-build-core 0.11.4 using CMake 4.0.3 (wheel)
678.6 *** Configuring CMake...
678.6 loading initial cache file /tmp/tmp_rnn10qf/build/CMakeInit.txt
678.6 CMake Error at /tmp/pip-build-env-jv1jk7w_/normal/lib/python3.10/site-packages/cmake/data/share/cmake-4.0/Modules/CMakeDetermineCCompiler.cmake:49 (message):
678.6 Could not find compiler set in environment variable CC:
678.6
678.6 gcc-11.
678.6 Call Stack (most recent call first):
678.6 CMakeLists.txt:3 (project)
678.6
678.6
678.6 CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
678.6 CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
678.6 -- Configuring incomplete, errors occurred!
678.6
678.6 *** CMake configuration failed
678.6 [end of output]
678.6
678.6 note: This error originates from a subprocess, and is likely not a problem with pip.
678.6 ERROR: Failed building wheel for llama-cpp-python
678.6 Failed to build llama-cpp-python
678.7 ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)

failed to solve: process "/bin/sh -c cd /workspace && ./docker_build_script_ubuntu.sh" did not complete successfully: exit code: 1
ubuntu@ip-10-0-1-144:~/h2ogpt$ g++ --version
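
The decisive line in that log is CMake refusing the compiler named by the CC environment variable: CC is set to gcc-11, but no gcc-11 binary is visible inside the build container. A quick check, as a sketch using only the names from the log above:

echo "CC=$CC CXX=$CXX"
command -v gcc-11 g++-11 || echo 'gcc-11/g++-11 not found on PATH'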

This is what my docker_build_script_ubuntu.sh looks like:

#!/bin/bash
set -o pipefail
set -ex

export DEBIAN_FRONTEND=noninteractive
export PATH=/h2ogpt_conda/bin:$PATH
export HOME=/workspace
export PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu121 https://huggingface.github.io/autogptq-index/whl/cu121"
export WOLFI_OS=0

# Install linux dependencies

apt-get update && apt-get install -y \
    git \
    curl \
    wget \
    software-properties-common \
    pandoc \
    vim \
    libmagic-dev \
    poppler-utils \
    tesseract-ocr \
    libtesseract-dev \
    libreoffice \
    autoconf \
    libtool \
    docker.io \
    nodejs \
    npm \
    zip \
    unzip \
    htop \
    tree \
    tmux \
    jq \
    net-tools \
    nmap \
    ncdu \
    mtr \
    rsync \
    build-essential \
    parallel \
    bc \
    pv \
    expect \
    cron \
    at \
    screen \
    inotify-tools \
    xmlstarlet \
    dos2unix \
    ssh

# Run upgrades

apt-get upgrade -y

# Install conda

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh &&
mkdir -p /h2ogpt_conda &&
bash ./Miniconda3-latest-Linux-x86_64.sh -b -u -p /h2ogpt_conda &&
conda update -n base conda &&
source /h2ogpt_conda/etc/profile.d/conda.sh &&
conda create -n h2ogpt -y &&
conda activate h2ogpt &&
conda install python=3.10 pygobject weasyprint -c conda-forge -y &&
echo "h2oGPT conda env: $CONDA_DEFAULT_ENV"

# If building for CPU only, remove CMAKE_ARGS below and avoid using a GPU image as the base image

# Choose llama_cpp_python ARGS for your system according to the llama_cpp_python backend documentation, e.g. for CUDA:

export GGML_CUDA=0

export CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"

# For Metal (Mac M1/M2), comment out the two lines above and uncomment the line below

export CMAKE_ARGS="-DLLAMA_METAL=on"

export FORCE_CMAKE=1
export GPLOK=1
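# Hedged sketch: docs/linux_install.sh below is what builds llama-cpp-python,
# so CC/CXX must name compilers that actually exist at this point. If the base
# image exports CC=gcc-11 without installing it, CMake fails exactly as in the
# log; repointing CC/CXX at the toolchain from build-essential is one
# workaround (assumption: the unversioned gcc/g++ are new enough for ggml):
# export CC="$(command -v gcc)"
# export CXX="$(command -v g++)"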
bash docs/linux_install.sh

chmod -R a+rwx /h2ogpt_conda

# Setup tiktoken cache

export TIKTOKEN_CACHE_DIR=/workspace/tiktoken_cache
python3.10 -c "
import tiktoken
from tiktoken_ext import openai_public

FakeTokenizer etc. needs tiktoken for general tasks

for enc in openai_public.ENCODING_CONSTRUCTORS:
encoding = tiktoken.get_encoding(enc)
model_encodings = [
'gpt-4',
'gpt-4-0314',
'gpt-4-32k',
'gpt-4-32k-0314',
'gpt-3.5-turbo',
'gpt-3.5-turbo-16k',
'gpt-3.5-turbo-0301',
'text-ada-001',
'ada',
'text-babbage-001',
'babbage',
'text-curie-001',
'curie',
'davinci',
'text-davinci-003',
'text-davinci-002',
'code-davinci-002',
'code-davinci-001',
'code-cushman-002',
'code-cushman-001'
]
for enc in model_encodings:
encoding = tiktoken.encoding_for_model(enc)
print('Done!')
"

# Open Web UI

conda create -n open-webui -y
source /h2ogpt_conda/etc/profile.d/conda.sh
conda activate open-webui
conda install python=3.11 -y
echo "open-webui conda env: $CONDA_DEFAULT_ENV"

chmod -R a+rwx /h2ogpt_conda
pip install https://h2o-release.s3.amazonaws.com/h2ogpt/open_webui-0.3.8-py3-none-any.whl

# Track build info

cp /workspace/build_info.txt /build_info.txt

mkdir -p /workspace/save
chmod -R a+rwx /workspace/save

# Cleanup
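
Nothing in this script sets CC or CXX, so the gcc-11 in the error presumably comes from the base image or from docs/linux_install.sh. Besides repointing CC/CXX as sketched in the comment above, the other hedge is to install the compilers they already name, assuming a base image whose repositories carry gcc-11:

apt-get update && apt-get install -y gcc-11 g++-11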

