A simple proxy API that fetches, parses, and caches data from ollama.com.
## 🔧 Documentation
> 🕒 **Note:** All responses are cached for 6 hours by default.
- **Rename the environment file**
  Rename `example.env` to `.env`.
- **Configure your environment**
  Open the `.env` file and customize the values as needed, such as:
  - API URL
  - Cache duration
  - Enable static website mode
- **Install dependencies**
  Run the following command to install all required Python packages:
  ```bash
  pip install -r requirements.txt
  ```
- **Run the API**
  ```bash
  python ollama_library_api.py
  ```
  Or, if you're getting errors:
  ```bash
  uvicorn ollama_library_api:app --host 0.0.0.0 --port 5115 --reload
  ```
  Once the server is running, you can confirm it responds with the quick check below.
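A quick smoke test from Python (a minimal sketch; it assumes the default port `5115` from the `uvicorn` command above and that the endpoint returns JSON):

```python
import requests

# Query the locally running proxy for the most popular models.
resp = requests.get("http://localhost:5115/library", params={"o": "popular"}, timeout=30)
resp.raise_for_status()
print(resp.json())  # parsed model listing; repeat calls are served from the cache
```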
Example `.env` configuration:

```env
# CODE version of the app
CODE_VERSION=1.3.0_Release

# Base URL of Ollama
OLLAMA_COM_BASE_URL=https://ollama.com

# Your API URL
CURRENT_BASE_URL=https://example.com

# If running a static website
STATIC_WEBSITE=False

# Cache duration in hours
CACHE_EXPIRE_AFTER=6
```
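For reference, one way these values could be read in Python using `python-dotenv` (a sketch only; the actual loading logic in `ollama_library_api.py` may differ):

```python
import os

from dotenv import load_dotenv

# Pull the key/value pairs from .env into the process environment.
load_dotenv()

# Read each setting, falling back to the defaults shown above.
OLLAMA_COM_BASE_URL = os.getenv("OLLAMA_COM_BASE_URL", "https://ollama.com")
CURRENT_BASE_URL = os.getenv("CURRENT_BASE_URL", "https://example.com")
STATIC_WEBSITE = os.getenv("STATIC_WEBSITE", "False").lower() == "true"
CACHE_EXPIRE_AFTER = int(os.getenv("CACHE_EXPIRE_AFTER", "6"))  # hours
```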
Example endpoints:

- 🔥 Popular models: `/library?o=popular`
- 🆕 Newest models: `/library?o=newest`
- 👁️ Filter by vision capability: `/library?c=vision`
- 🔥 Popular models by `jmorganca`: `/jmorganca?o=popular`
- 📄 Get details for `nextai` by `htdevs`: `/htdevs/nextai`
- 🔍 Search for `mistral`: `/search?q=mistral`
- 📘 Details for `llama3`: `/library/llama3`
- 🏷️ Tag details for `llama3:8b`: `/library/llama3:8b`
- 🏷️ All tags for `llama3`: `/library/llama3/tags`
- 📦 Get `model` blob for `llama3:8b`: `/library/llama3:8b/blobs/model`
- ⚙️ Get `params` blob for `llama3:8b`: `/library/llama3:8b/blobs/params`
- 🧬 Get blob by digest: `/library/llama3:8b/blobs/a3de86cd1c13`
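Putting a few of these together, a small client sketch using `requests` (it assumes the API runs at the base URL shown and returns JSON; the exact response shapes depend on what ollama.com serves):

```python
import requests

BASE_URL = "http://localhost:5115"  # replace with your CURRENT_BASE_URL

def get_json(path: str, **params) -> object:
    """GET a proxy endpoint and return the parsed JSON body."""
    resp = requests.get(f"{BASE_URL}{path}", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Search for "mistral", then pull details and tags for llama3.
print(get_json("/search", q="mistral"))
print(get_json("/library/llama3"))
print(get_json("/library/llama3/tags"))
```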
- Programmed by Blood Shot
- Maintained by Houloude9
- Hosted by Koyeb
💡 Tip: Use these endpoints to build your own Ollama-powered dashboards, tools, or integrations effortlessly.