Uses LangChain, LangServe, and GPT-4 to chat with the ACMI Public API collection.
- Build the virtual environment: `make build-local`
- Set up your OpenAI API Key: `cp config.tmpl.env config.env`
- Install direnv if you'd like to load API Keys from the `config.env` file: `brew install direnv`
- Load the environment: `direnv allow`
- Start chatting on the command line: `make up-local`
- Start chatting in a web browser: `make server-local` and visit: http://localhost:8000/playground (a minimal server is sketched after this list)
- See the API server documentation: http://localhost:8000/docs
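For a sense of how one server process can expose both the playground and the API docs, here is a minimal LangServe sketch. The app title, model, and the runnable being mounted are assumptions for illustration, not the contents of `api/server.py`:

```python
# A minimal LangServe app sketch; the mounted runnable, title, and model
# are assumptions, not this repo's actual api/server.py.
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI(title="ACMI collection chat")

# Mounting a runnable at the root path serves the playground at /playground;
# FastAPI serves its interactive API documentation at /docs.
add_routes(app, ChatOpenAI(model="gpt-4o"), path="")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```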
Or if you have your own Python environment set up:
- Install the dependencies: `pip install -r requirements.txt`
- Start chatting on the command line: `python chat.py` (this style of retrieval chat is sketched after this list)
- Start chatting in a web browser: `python api/server.py` and visit: http://localhost:8000/playground
- See the API server documentation: http://localhost:8000/docs
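To illustrate the kind of retrieval chat `python chat.py` runs, here is a sketch that answers questions from a Chroma vector database with LangChain. The collection name and persist directory are assumptions, and this is not the repo's actual code:

```python
# A sketch of a retrieval chat over a Chroma vector database of ACMI works.
# Collection name and persist directory are assumed, not taken from this repo.
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectordb = Chroma(
    collection_name="acmi-works",   # assumed COLLECTION_NAME
    persist_directory="works_db",   # assumed PERSIST_DIRECTORY
    embedding_function=OpenAIEmbeddings(model="text-embedding-ada-002"),
)
chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=vectordb.as_retriever(),
    return_source_documents=True,  # lets us print the source API URLs
)

result = chain.invoke({"query": "What was the collection item that has gnomes in it?"})
print(result["result"])
print([doc.metadata.get("source") for doc in result["source_documents"]])
```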
Or to run the API server using Docker:
- Build: `make build`
- Run: `make up`
- Visit: http://localhost:8000
- Clean-up: `make down`
Optional environment variables you can set:
- `DATABASE_PATH` - set where your Chromadb vector database is located
- `COLLECTION_NAME` - the name of the Chromadb collection to save your data to
- `PERSIST_DIRECTORY` - the name of the directory to save your persistent Chromadb data
- `MODEL` - the large language model to use (defaults to OpenAI `gpt-4o`)
- `EMBEDDINGS_MODEL` - the embeddings model to use (defaults to OpenAI `text-embedding-ada-002`)
- `REBUILD` - set to `true` to rebuild your Chromadb vector database
- `ALL` - set to `true` to rebuild with the entire ACMI Public API collection
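One way these variables could be consumed, using the defaults stated above where the README gives them; the defaults marked "assumed" are placeholders rather than the repo's actual values:

```python
# A sketch of reading the optional environment variables; defaults marked
# "assumed" are placeholders, not values taken from this repo.
import os

DATABASE_PATH = os.getenv("DATABASE_PATH", "works_db")          # assumed default
COLLECTION_NAME = os.getenv("COLLECTION_NAME", "acmi-works")    # assumed default
PERSIST_DIRECTORY = os.getenv("PERSIST_DIRECTORY", "works_db")  # assumed default
MODEL = os.getenv("MODEL", "gpt-4o")
EMBEDDINGS_MODEL = os.getenv("EMBEDDINGS_MODEL", "text-embedding-ada-002")
REBUILD = os.getenv("REBUILD", "").lower() == "true"
ALL = os.getenv("ALL", "").lower() == "true"
```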
When using Docker:
- `CACHE_URL` - the URL to your pre-generated embeddings database e.g. https://example.com/path/
The `scripts/entrypoint.sh` script will look for a file named `works_db_chat.tar.gz` at the `CACHE_URL` to process.

To create that file, generate your embeddings database locally and then run: `tar -cvzf works_db_chat.tar.gz works_db`
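In Python terms, the fetch-and-unpack step works roughly like this; it approximates what `scripts/entrypoint.sh` does rather than reproducing it:

```python
# Roughly what the entrypoint does with CACHE_URL: download the archive and
# unpack the works_db directory. An approximation, not the shell script itself.
import os
import tarfile
import urllib.request

archive = "works_db_chat.tar.gz"
cache_url = os.environ["CACHE_URL"].rstrip("/")  # e.g. https://example.com/path
urllib.request.urlretrieve(f"{cache_url}/{archive}", archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall()  # restores the works_db directory created by tar -cvzf
```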
To use an open source model on your computer with Ollama:
- Install `ollama` with: `brew install ollama`
- Start the `ollama` server with: `OLLAMA_HOST=0.0.0.0:11434 ollama serve`
- Pull an open source model: `ollama pull llama3`
- Set `MODEL=llama3`
- Set `LLM_BASE_URL=http://<YOUR_IP_ADDRESS>:11434` (how these two variables might be wired up is sketched after this list)
- Start the chat: `make up`
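One plausible way `MODEL` and `LLM_BASE_URL` reach LangChain is via the community Ollama chat model; how the app actually constructs its LLM may differ:

```python
# A sketch of building an Ollama-backed chat model from MODEL and LLM_BASE_URL;
# the repo's actual LLM construction may differ.
import os

from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    model=os.getenv("MODEL", "llama3"),
    base_url=os.getenv("LLM_BASE_URL", "http://localhost:11434"),
)
print(llm.invoke("Describe one work in the ACMI collection.").content)
```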
By default we only build the first ten pages of the Public API into the vector database, but if you'd like to build the entire collection:
- Add `ALL=true` to your `config.env`
- Then either delete your persistent directory or also add `REBUILD=true` to your `config.env`
- Rebuild the app: `make build-local` (the paging this implies is sketched after this list)
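For a sense of the scale, paging through the Public API looks roughly like this; the `page` parameter and `results` key are assumptions about the API's pagination scheme, so check the API documentation before relying on them:

```python
# A sketch of paging through ACMI Public API works; the `page` parameter and
# `results` key are assumptions about the API's pagination scheme.
import requests

def fetch_works(max_pages):
    for page in range(1, max_pages + 1):
        response = requests.get("https://api.acmi.net.au/works/", params={"page": page})
        response.raise_for_status()
        yield from response.json().get("results", [])

# The default build indexes ten pages; ALL=true walks every page instead.
for work in fetch_works(max_pages=10):
    print(work["id"], work.get("title"))
```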
Chat with the first page of the ACMI Public API: Google Colab.
```
python chat.py

=========================
ACMI collection chat v0.1
=========================

Question: What was the collection item that has gnomes in it?

Answer: The collection item that has gnomes in it is titled "Toy magic lantern slide (Gnomes in umbrellas on water)". It is a work from Germany, circa 1900, and was last on display at ACMI: Gallery 1 on June 23, 2023. The item is categorized under the curatorial section "The Story of the Moving Image → Moving Pictures → MI-02. Play and Illusion → MI-02-C01 → Panel C8" and has measurements of 3.5 x 14.3cm. It is a 2D Object, specifically a Glass slide/Pictorial.

Sources: ['https://api.acmi.net.au/works/119591']
```