Kowalski is an API-driven multi-survey data archive and alert broker. Its main focus is the Zwicky Transient Facility.
A schematic overview of the functional aspects of Kowalski and how they interact is shown below:
- A non-relational (NoSQL) database, `MongoDB`, powers the data archive, the alert stream sink, and the alert handling service.
- An API layer provides an interface for interaction with the backend: it is built using the `python` asynchronous web framework `aiohttp`, and the standard `python` async event loop serves as a simple, fast, and robust job queue. Multiple instances of the API service are maintained using the `Gunicorn` WSGI HTTP Server.
- A programmatic `python` client is also available to interact with Kowalski's API.
- Incoming and outgoing traffic can be routed through `traefik`, which acts as a simple and performant reverse proxy/load balancer.
- An alert brokering layer listens to `Kafka` alert streams and uses a `dask.distributed` cluster for distributed alert packet processing, which includes data preprocessing, execution of machine learning models, catalog cross-matching, and ingestion into `MongoDB`. It also executes user-defined filters based on the augmented alert data and posts the filtering results to a `SkyPortal` instance.
- Kowalski is containerized using `Docker` and orchestrated with `docker-compose`, allowing for simple and efficient deployment in the cloud and/or on premises.
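To make the filtering step concrete, here is a minimal sketch of what a user-defined filter can look like: Kowalski expresses filters as MongoDB aggregation pipelines run against the augmented alert packets. The field names below (`candidate.drb`, `candidate.magpsf`) follow the ZTF alert schema; the thresholds and surrounding structure are purely illustrative.

```python
# Sketch of a user-defined alert filter: a MongoDB aggregation pipeline
# of the kind the broker runs against augmented ZTF alert packets before
# posting passing alerts to SkyPortal. Thresholds are illustrative only.
filter_pipeline = [
    # keep alerts that are likely real (high deep-learning real/bogus score)
    # and brighter than magnitude 19
    {"$match": {"candidate.drb": {"$gt": 0.9}, "candidate.magpsf": {"$lt": 19.0}}},
    # return only the identifiers needed downstream
    {"$project": {"_id": 0, "objectId": 1, "candid": 1}},
]

print(len(filter_pipeline))  # two stages: $match, then $project
```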
Kowalski is an API-first system. The full OpenAPI specs can be found here. Most users will only need the queries section of the specs.
The easiest way to interact with a Kowalski instance is by using the `python` client `penquins`.
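As a sketch of what that looks like, the payload below is a cone-search query of the general form accepted by Kowalski's queries API; with `penquins` it would be submitted roughly as shown in the trailing comment. The catalog name, object name, and coordinates here are assumptions for illustration; consult the OpenAPI specs for the authoritative schema.

```python
# Illustrative payload for Kowalski's queries API: a cone search around one
# object, matched against a ZTF alerts catalog. Object name, coordinates,
# and catalog name are assumptions for illustration only.
q = {
    "query_type": "cone_search",
    "query": {
        "object_coordinates": {
            "cone_search_radius": 2,
            "cone_search_unit": "arcsec",
            # {object_name: [RA_deg, Dec_deg]}
            "radec": {"ZTF20example": [68.578209, 49.0871395]},
        },
        "catalogs": {
            "ZTF_alerts": {
                "filter": {},
                "projection": {"_id": 0, "objectId": 1},
            }
        },
    },
}

# with penquins (not run here; credentials and host are placeholders):
# from penquins import Kowalski
# k = Kowalski(username="...", password="...", host="kowalski.example.org")
# response = k.query(query=q)
```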
Start off by cloning the repo, then `cd` into the cloned directory:

```bash
git clone https://github.com/dmitryduev/kowalski.git
cd kowalski
```

Make sure you have a `python` environment that meets the requirements to run Kowalski:

```bash
pip install -r requirements.txt
```

You can then use the `kowalski.py` utility to manage Kowalski.
You need config files in order to run Kowalski. You can start off by copying the default config/secrets over:

```bash
cp config.defaults.yaml config.yaml
cp docker-compose.defaults.yaml docker-compose.yaml
```

`config.yaml` contains the API and ingester configs, and the `supervisord` config for the API and ingester containers, together with all the secrets, so be careful when committing code / pushing docker images. If you want to run in a production setting, be sure to modify `config.yaml` and choose strong passwords!
`docker-compose.yaml` serves as a config file for `docker-compose` and can be used for different Kowalski deployment modes. Kowalski comes with several template `docker-compose` configs (see below for more info).
Finally, once you've set the config files, you can build an instance of Kowalski with the following command:

```bash
./kowalski.py up --build
```

You have now successfully built a Kowalski instance!
Any time you want to rebuild Kowalski, you need to re-run this command.
If you want to just interact with a Kowalski instance that has already been built, you can drop the --build flag:
```bash
# start up a pre-built Kowalski instance
./kowalski.py up

# shut down a pre-built Kowalski instance
./kowalski.py down
```
You can check that a running Kowalski instance is working by using the Kowalski test suite:
```bash
./kowalski.py test
```

Kowalski uses `docker-compose` under the hood and requires a `docker-compose.yaml` file.
There are several available deployment scenarios:
- Bare-bones
- Bare-bones + broker for `SkyPortal`/`Fritz`
- Behind `traefik`
Use `docker-compose.defaults.yaml` as a template for `docker-compose.yaml`. Note that the environment variables for the `mongo` service must match `admin_*` under `kowalski.database` in `config.yaml`.
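As a hypothetical illustration (the variable names here are assumptions; check `docker-compose.defaults.yaml` for the actual ones), the two files have to agree on the admin credentials:

```yaml
# docker-compose.yaml, mongo service (sketch):
services:
  mongo:
    environment:
      # must equal the admin_* values under kowalski.database
      # in config.yaml (defaults shown)
      MONGO_INITDB_ROOT_USERNAME: mongoadmin
      MONGO_INITDB_ROOT_PASSWORD: mongoadminsecret
```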
Use `docker-compose.fritz.defaults.yaml` as a template for `docker-compose.yaml`. If you want the alert ingester to post (filtered) alerts to `SkyPortal`, make sure `{"misc": {"broker": true}}` is set in `config.yaml`.
Use `docker-compose.traefik.defaults.yaml` as a template for `docker-compose.yaml`.
If you have a publicly accessible host allowing connections on port 443 and a DNS record with the domain you want to expose pointing to this host, you can deploy Kowalski behind `traefik`, which will act as the edge router: it can do many things, including load balancing and obtaining a TLS certificate from `letsencrypt`.
In `docker-compose.yaml`:

- Replace `[email protected]` with your email.
- Replace `private.caltech.edu` with your domain.
To shut down a running deployment:

```bash
./kowalski.py down
```

OpenAPI specs can be found under `/docs/api` once Kowalski is up and running.
Contributions to Kowalski are made through GitHub Pull Requests, a set of proposed commits (or patches).
To prepare, you should:
- Create your own fork of the kowalski repository by clicking the "fork" button.

- Clone (download) your copy of the repository, and set up a remote called `upstream` that points to the main Kowalski repository:

  ```bash
  git clone git@github.com:<yourname>/kowalski
  git remote add upstream git@github.com:dmitryduev/kowalski
  ```
Then, for each feature you wish to contribute, create a pull request:
- Download the latest version of Kowalski, and create a new branch for your work. Here, let's say we want to contribute some documentation fixes; we'll call our branch `rewrite-contributor-guide`.

  ```bash
  git checkout master
  git pull upstream master
  git checkout -b rewrite-contributor-guide
  ```
- Make modifications to Kowalski and commit your changes using `git add` and `git commit`. Each commit message should consist of a summary line and a longer description, e.g.:

  ```
  Rewrite the contributor guide

  While reading through the contributor guide, I noticed several places
  in which instructions were out of order. I therefore reorganized all
  sections to follow logically, and fixed several grammar mistakes along
  the way.
  ```

- When ready, push your branch to GitHub:

  ```bash
  git push origin rewrite-contributor-guide
  ```
Once the branch is uploaded, GitHub should print a URL for turning your branch into a pull request. Open that URL in your browser, write an informative title and description for your pull request, and submit it. There, you can also request a review from a team member and link your PR with an existing issue.
- The team will now review your contribution and suggest changes. To simplify review, please limit pull requests to one logical set of changes. To incorporate changes recommended by the reviewers, commit edits to your branch and push to the branch again (there is no need to re-create the pull request; it will automatically track modifications to your branch).
- Sometimes, while you were working on your feature, the `master` branch is updated with new commits, potentially resulting in conflicts with your feature branch. To fix this, please merge in the latest `upstream/master` from your feature branch:

  ```bash
  git checkout rewrite-contributor-guide
  git merge upstream/master
  ```

  Developers may merge `master` into their branch as many times as they want to.
- Once the pull request has been reviewed and approved by at least two team members, it will be merged into Kowalski.
Install our pre-commit hook as follows:
```bash
pip install pre-commit
pre-commit install
```
This will check your changes before each commit to ensure that they conform with our code style standards. We use `black` to reformat `python` code and `flake8` to verify that code complies with PEP8.
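For reference, a pre-commit configuration of the sort described looks roughly like the sketch below; the hook revisions are placeholders, and the repository's own `.pre-commit-config.yaml` is authoritative:

```yaml
# Illustrative .pre-commit-config.yaml; pin revs to the versions
# your project actually uses.
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
```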
When developing, it can be useful to just run Kowalski directly.
To install the API requirements, run:
```bash
pip install -r kowalski/requirements_api.txt
```

Just as described above, the config file must be created:

```bash
cp config.defaults.yaml config.yaml
```

When running locally, it is likely that `database.host` should be `127.0.0.1` or similar. For simplicity, we also set `database.replica_set` to `null`.
We need to set the admin and user roles for the database. To do so, log in to MongoDB (using the default values from the config):

```bash
mongosh --host 127.0.0.1 --port 27017
```

and then, from within the mongo terminal:

```
use kowalski
db.createUser( { user: "mongoadmin", pwd: "mongoadminsecret", roles: [ { role: "userAdmin", db: "admin" } ] } )
db.createUser( { user: "ztf", pwd: "ztf", roles: [ { role: "readWrite", db: "admin" } ] } )
db.createUser( { user: "mongoadmin", pwd: "mongoadminsecret", roles: [ { role: "userAdmin", db: "kowalski" } ] } )
db.createUser( { user: "ztf", pwd: "ztf", roles: [ { role: "readWrite", db: "kowalski" } ] } )
```

The API app can then be run with:

```bash
KOWALSKI_APP_PATH=./ KOWALSKI_PATH=kowalski python kowalski/api.py
```

Tests can then be run by going into the `kowalski/` directory:

```bash
cd kowalski
```

and running:

```bash
KOWALSKI_APP_PATH=../ python -m pytest -s api.py ../tests/test_api.py
```

which should complete successfully.
To install the broker requirements, run:

```bash
pip install -r kowalski/requirements_ingester.txt
```

The ingester requires Kafka, which can be installed with:

```bash
export kafka_version=2.13-2.5.0
wget https://storage.googleapis.com/ztf-fritz/kafka_$kafka_version.tgz
tar -xzf kafka_$kafka_version.tgz
```

Installed in this way, `path.kafka` in the config should be set to `./kafka_2.13-2.5.0`.
The broker can then be run with:

```bash
KOWALSKI_APP_PATH=./ python kowalski/alert_broker_ztf.py
```

Tests can then be run by going into the `kowalski/` directory:

```bash
cd kowalski
```

and running:

```bash
KOWALSKI_APP_PATH=../ KOWALSKI_DATA_PATH=../data python -m pytest -s alert_broker_ztf.py ../tests/test_alert_broker_ztf.py
```

We also provide an option, `USE_TENSORFLOW=False`, for users who cannot install TensorFlow for whatever reason.
To test the ingester, `path.logs` in the config should be set to `./data/logs/`. Tests can then be run by going into the `kowalski/` directory:

```bash
cd kowalski
```

and running:

```bash
KOWALSKI_APP_PATH=../ KOWALSKI_DATA_PATH=../data python -m pytest ../tests/test_ingester.py
```

To install the tools requirements, run:
```bash
pip install -r kowalski/requirements_tools.txt
```

Tests can then be run by going into the `kowalski/` directory:

```bash
cd kowalski
```

and running:

```bash
KOWALSKI_APP_PATH=../ KOWALSKI_DATA_PATH=../data python -m pytest -s ../tools/istarmap.py ../tests/test_tools.py
```