Forked from https://github.com/narrowfail/django-channels-chat to play with the OpenAI GPT API.
It's a small person-to-person chat application built with Django, where some users can be bots. Bots are defined in YAML as a pipeline of simple Python steps.
demo.mp4
To keep local development simple we use docker and docker-compose. Docker is a prerequisite.
For the bots to work, you'll also need an OpenAI API token.
Get started by running ./setup which will:
- set your .env vars
- build your containers
- run your migrations
- and set up your django root user.
You also get the option to set up a docker-compose.override.yml file for your local environment. The docker-compose.override.yml.example is what I use for local mac development.
It maps the local files into the containers so you can do some programming. It will hang the containers so you can exec in and run commands.
Run docker-compose up -d to start the containers - or just ./go
Exec in like docker-compose exec app /bin/bash
Once in the app container run npm run server
To run the bots, exec in like docker-compose exec bot /bin/bash
Once in, run npm run bots
See Using docker-compose shortcuts below
So far I've
- containerised it using docker-compose
- added deployment scripts for digital ocean
- got letsencrypt working
- got websockets working behind nginx (near killed me)
- switched to redis & postgresql
- upgraded to bootstrap 5 and given it a responsive face-lift
- dark mode! It's very dark
- used stimulusjs & parcel to organise my javascript
- switched to rendering server-side rather than API + JS
- bookmarkable RESTful URLs
- created a bot server with configurable pipelines
- made a kids gratitude journal as a test pipeline
- broken all the tests and not fixed them!!!
It's quite a bit more complex than narrowfail's beautifully simple app.
- Bots are defined in app/bot_config and are loaded by the bot_manager
- Bots consist of a series of steps that are mediated by redis
- The bot_manager registers the bots with the django chat app
- When the user posts a message, the chat app posts it to the bot manager's API
- When the bot replies, it posts to the chat app's API
- The chat app relays replies to the client's browser over websockets (see the sketch below)
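To make that round trip concrete, here's a minimal sketch of how the bot server might post a reply back to the chat app. The endpoint path, payload keys and hostname are assumptions for illustration only; the real API lives in the django chat app.

```python
# Hypothetical sketch only - the endpoint, payload keys and hostname are assumptions,
# not the actual chat app API.
import os
import requests

CHAT_APP_URL = "http://app:8000"  # assumed docker-compose service name and port
TOKEN = os.environ["DJANGO_SUPERUSER_TOKEN"]  # the token the bot server registers with

def post_reply(room_id, text):
    """Post a bot reply to the chat app, which relays it to the browser over websockets."""
    response = requests.post(
        f"{CHAT_APP_URL}/api/messages/",  # hypothetical endpoint
        json={"room": room_id, "body": text},
        headers={"Authorization": f"Token {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
```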
Because of time constraints this project lacks:
- User Sign-In / Forgot Password
- User Selector Pagination
- Good Test Coverage
- Better Comments / Documentation Strings
- Frontend Tests
- failed to upgrade to django 4 - arrgghhh!!
- DONE Modern Frontend Framework (like React) - used stimulusjs ;)
- DONE Frontend Package (automatic linting, building and minification) - used parcel
- DONE Proper UX / UI design (looks plain bootstrap) - pretty bootstrap
This script mounts your pgdata and redis data on an external docker volume, so if you rebuild or remove your database or redis containers you don't lose all your data and don't have to reinstall all your packages.
In the docker-compose.override.yml.example I suggest persisting the data to local folders for inspection during local development. I also persist the user volume for auto-complete when inside your container.
DJANGO_SUPERUSER_TOKEN=xxx # run ./manage.py createsuperusertoken

The bot server uses DJANGO_SUPERUSER_TOKEN to register bots that are configured via yaml in app/bot_config.
See bot_config/diaryfile.yaml and bot_config/japanese_bot.yaml for examples of how to make a bot.
Each step in these yaml files is a python class in the same directory.
You can make your own custom step class with a process method that takes a payload parameter and returns the payload. If you include payload['reply'], it will get posted to the user.
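For example, a minimal custom step might look something like this. The class name and the 'message' payload key are assumptions for illustration; check the existing steps in app/bot_config for the exact keys the pipeline passes.

```python
# Illustrative sketch of a custom step - not an actual step from this repo.
class ShoutyStep:
    def process(self, payload):
        # Whatever the previous step put in the payload arrives here;
        # the 'message' key is an assumed example.
        text = payload.get('message', '')

        # Anything placed in payload['reply'] gets posted back to the user.
        payload['reply'] = text.upper()
        return payload
```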
The pipeline approach is designed to make it simple to contribute reusable steps.
To deploy to digital ocean you'll need doctl installed as a prerequisite.
On mac, brew install doctl
Then you'll need to configure it with your digital ocean token.
doctl auth init
In the deploy directory there are a number of scripts to help you get into production. You run them from the parent directory.
- deploy/go - will do everything
- deploy/01-build-server - will make a digital ocean server and set .digital_ocean_env so your other scripts will work
- deploy/02-config-server - this does everything to pave the road for deployment such as installing software and creating a non root user
- deploy/03-deploy-repo - this will clone the repo in and copy up the env files mentioned below, then fire up docker compose
- deploy/userlogin - a shortcut for logging into the server
- deploy/rootlogin - I should probably kill this
- deploy/backup - take a copy of the production database
- deploy/restore - restores the latest backup
- deploy/cleanup - destroys the digital ocean server
If you are using the digital ocean deploy scripts in /deploy, there are two files you'll need:
- .env-prod - for production configuration (same as .env unless you have different prod config)
- docker-compose-override.yml.prod - which opens the app port
Typing docker-compose all the time can be tedious, so add this to your ~/.bashrc or ~/.bash_profile
Then
docker-compose stop && docker-compose up -d && docker-compose logs -tf
becomes just
dcs && dcu -d && dcl -tf
but I have a shortcut for that too
dcrestart
The first alias bp makes editing and reloading your bash_profile easy.
The second command get-python makes getting this repo easy.
# shortcut for editing your bash profile and these shortcuts
alias bp='vim ~/.bash_profile && . ~/.bash_profile'
# Get this repo!
get-python() { git clone git@github.com:lukerohde/docker-python-template.git . ; rm -rf .git ; }
# docker shortcuts
alias ds='docker stats'
alias dc='docker-compose'
alias dce='docker-compose exec'
alias dcu='docker-compose up'
alias dcd='docker-compose down'
alias dcr='docker-compose run'
alias dcs='docker-compose stop'
alias dcb='docker-compose build'
alias dcps='docker-compose ps'
alias dcl='docker-compose logs'
alias dclf='docker-compose logs -f --tail=1000'
alias dckill='docker-compose kill'
alias dcrestart='docker-compose stop && docker-compose up -d && docker-compose logs -ft'
alias dps='docker ps'
alias dk='docker kill'
alias dkall='docker kill $(docker ps -q)'
alias drestart="osascript -e 'quit app \"Docker\"' && open -a Docker"
alias dstop='docker stop $(docker ps -aq)'
alias dprune='docker system prune -a'
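# exec a bash shell in the given service container, e.g. dceb app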
dceb() { docker-compose exec $1 /bin/bash ; }
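# start the given service then exec a bash shell in it, e.g. dcub app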
dcub() { docker-compose up -d $1 && docker-compose exec $1 /bin/bash ; }
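# start the db service and open a psql prompt as the postgres user (optionally pass a database name)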
dcudb() { docker-compose up -d db && docker-compose exec db psql -U postgres $1 ; }
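# stop all containers and prune all docker data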
ddeleteall() {
docker stop $(docker ps -aq)
docker system prune -a
}
Use the override file to mount your app volume with :delegated (a Docker Desktop for Mac volume option that relaxes file-sync consistency for faster performance).
cp docker-compose.override.yml.example docker-compose.override.yml
app:
volumes:
- ./app:/app:delegated
The provided docker-compose.override.yml.example file will not actually run your app. Instead it runs docker-start.override, which hangs the container to leave it running so you can shell in and run your application yourself. This makes debugging easy.
The shortcut for running your app then shelling in is
dcub app
Once shelled in, you can run npm run server etc.