Allows you to convert LLMs in GGUF format to llamafile and upload them to a public HF repo. You can do that from your local machine or from GitHub Actions.
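The end result is a single self-contained executable. As a rough sketch of what you get (the output file name is an assumption; `llamafile-convert` appears to name the output after the input gguf), running it locally looks something like this:

```sh
# Make the converted llamafile executable and run it;
# by default llamafile starts a local llama.cpp server with a web UI.
chmod +x OLMo-1.7-7B-hf.Q8_0.llamafile
./OLMo-1.7-7B-hf.Q8_0.llamafile
```
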
- Convert a gguf model to llamafile and upload to Hugging Face using Docker
- GitHub Actions for converting a gguf model to llamafile and uploading to Hugging Face
- GitHub Actions for converting a raw model to llamafile and uploading to Hugging Face
## Convert a gguf model to llamafile and upload to Hugging Face using Docker

- Create HF repo
- Create an access token with write permission and save it somewhere
- Create a `.env` file and set the corresponding vars there (an example is sketched after this list):

  ```sh
  cp .env.example .env
  ```

- Up the container:

  ```sh
  docker compose up -d olmo
  ```

- Copy the gguf model into the container:

  ```sh
  docker cp OLMo-1.7-7B-hf.Q8_0.gguf llfiler-olmo-1:/app/
  ```

- Connect to the container shell:

  ```sh
  docker exec -it llfiler-olmo-1 /bin/bash
  ```

- Convert the gguf model to llamafile:

  ```sh
  llamafile-0.8.6/bin/llamafile-convert OLMo-1.7-7B-hf.Q8_0.gguf
  ```

- Upload to HF:

  ```sh
  huggingface-cli upload "$HF_REPO" "$HF_REPO_FILE"
  ```

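For reference, a filled-in `.env` might look like the sketch below. The variable names are an assumption based on the ones referenced in these steps and in the workflow inputs; follow `.env.example` for the names the project actually expects, and replace the placeholder values with your own.

```sh
# Sketch of a .env file (variable names are assumed -- follow .env.example in the repo)

# Write-scoped Hugging Face access token
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
# Target HF repo to upload to
HF_REPO=your-username/OLMo-1.7-7B-llamafile
# File to upload (the converted llamafile)
HF_REPO_FILE=OLMo-1.7-7B-hf.Q8_0.llamafile
```

Recent versions of `huggingface-cli` pick up the token from the `HF_TOKEN` environment variable; if yours does not, run `huggingface-cli login` inside the container before the upload step.
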
## GitHub Actions for converting a gguf model to llamafile and uploading to Hugging Face

- Copy the `.github/workflows/main.yml` workflow to your repo
- Add the `HF_TOKEN` secret to your repo secrets
- Provide `HF_REPO`, `HF_REPO_FILE`, `REMOTE_GGUF_MODEL`, and `LLAMAFILE_RELEASE` as inputs on workflow start (see the sketch after this list)
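
If you prefer the command line to the Actions tab, something like the following should work with the GitHub CLI, assuming `main.yml` exposes those four names as `workflow_dispatch` inputs; all values except the llamafile release (taken from the Docker steps above) are placeholders:

```sh
# Trigger the workflow via the GitHub CLI; input names assumed to match main.yml
gh workflow run main.yml \
  -f HF_REPO="your-username/OLMo-1.7-7B-llamafile" \
  -f HF_REPO_FILE="OLMo-1.7-7B-hf.Q8_0.llamafile" \
  -f REMOTE_GGUF_MODEL="https://example.com/path/to/OLMo-1.7-7B-hf.Q8_0.gguf" \
  -f LLAMAFILE_RELEASE="0.8.6"
```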