I've uploaded some old research notes I never had time to dive deeper into. I'm not sure if they are still relevant, but if anyone finds any of it interesting, I am always happy to chat about it. In particular:
- the weight initialisation may not be generating "good" gradients, according to the formulae in the Xavier initialisation paper, when used for PC networks (up to page 9);
- rec-lra (https://arxiv.org/abs/2002.03911) does something the authors do not make explicit in the paper, which could perhaps be mathematically formalised and generalised so that it also applies to PC, in order to create more interconnected networks (that propagate the energy faster) (pages 9-10);
- it could be that waiting for the network to converge during inference is actually wrong under the current formulation. This would explain a lot of the behaviours/tricks we have found necessary to make PCNs train effectively. However, it is a big problem for PC, since its theoretical formulation is built around the idea of state convergence via inference (pages 11-12; sorry if it's a bit messy).
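For context, the Xavier/Glorot initialisation referenced in the first note samples each weight with variance $\mathrm{Var}(W_{ij}) = 2 / (n_{\mathrm{in}} + n_{\mathrm{out}})$, where $n_{\mathrm{in}}$ and $n_{\mathrm{out}}$ are the layer's fan-in and fan-out; the open question in the note is whether this choice still produces well-scaled gradients when the layer sits inside a PC network.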
PCX is a JAX-based Python library designed to develop highly configurable predictive coding networks. Please refer to the tutorial notebooks in the examples folder to get started. PCX can be installed by following one of the three methods listed below.
First, create an environment with Python >= 3.10 and install JAX in the correct version for your accelerator device. For CUDA >= 12.0, the command is

```bash
pip install -U "jax[cuda12]"
```

For CPU only:

```bash
pip install -U "jax[cpu]"
```

Then you have two options:
- Install a stable version.
- Clone this repository and install the package by linking to this folder. The installation of this library only links to this folder and thus dynamically picks up all your changes.
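Whichever option you choose, you can verify that JAX was installed with the right backend by listing the devices it can see (an optional sanity check):

```bash
# Should print CUDA devices for the GPU build, or CPU devices for the CPU-only build.
python -c "import jax; print(jax.devices())"
```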
On the right side of the repository page, click on "Releases" and download the wheel file. You can install it using

```bash
pip install path/to/wheel_file.whl
```

Alternatively, you can use the PyPI version by [work in progress...]
Clone this repository locally and then:

```bash
pip install -e /path/to/this/repo/ --config-settings editable_mode=strict
```

TL;DR: This is an alternative installation method that creates a fully configured environment to ensure your results are reproducible (no pip install, see the previous section for that; no docker install, see the next section for that):
- Install conda.
- Install poetry.
- Run `poetry config virtualenvs.create false`.
- Create a conda environment with Python >= 3.10: `conda create -n pcax python=3.10`.
- Activate the environment: `conda activate pcax`.
- `cd` into the root pcax folder.
- Run `poetry install --no-root`. (The full sequence is condensed into a single block below.)
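For convenience, the same steps collected in one place (the clone path is a placeholder; adjust it to wherever you checked out the repository):

```bash
# Assumes conda and poetry are already installed (see the list above).
poetry config virtualenvs.create false
conda create -n pcax python=3.10
conda activate pcax
cd /path/to/pcx            # placeholder: root of your local clone
poetry install --no-root
```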
In this way, we use poetry to make sure the environment is 100% reproducible. If you are not familiar with poetry, now is a good time to skim through the docs.
- If you need to add a Python package to the environment, use `poetry add package` (the common commands are summarised in the block after this list). Avoid `pip install`!
- If you want to update the version of an existing package, run `poetry update package`. It will update the package to the latest available version that fits the constraints.
- DO NOT update the package versions in the `pyproject.toml` file manually. Surprisingly, `pyproject.toml` DOES NOT specify the versions that will be installed; `poetry.lock` does. So, first check the package version in `poetry.lock`.
- DO NOT update the package versions in the `poetry.lock` file manually. Use `poetry update package` instead. `poetry.lock` HAS to be generated and signed automatically.
- If `pyproject.toml` and `poetry.lock` have diverged for some reason (for example, you've merged another branch and resolved conflicts in `poetry.lock`), use `poetry lock --no-update` to fix the `poetry.lock` file.
- DO NOT commit changes to `pyproject.toml` without running `poetry lock --no-update` to synchronize the `poetry.lock` file. If you commit a `pyproject.toml` that is not in sync with `poetry.lock`, this will break the automatic environment configuration for everyone.
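For quick reference, the day-to-day dependency workflow described above (the package name is a placeholder):

```bash
# Add a new dependency (append --group dev if it is only needed for development).
poetry add some-package

# Update an existing dependency to the latest version allowed by the constraints.
poetry update some-package

# Re-synchronize poetry.lock with pyproject.toml after edits or merges.
poetry lock --no-update
```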
Run your development environment in a docker container. This is the most straightforward option to work with pcx, as the development environment is pre-configured for you.
The `Dockerfile` is located in `pcx/docker`, together with the `run.sh` script that builds and runs it. You can play with the `Dockerfile` directly if you know what you are doing or if you don't use VSCode. If you want a fully automated environment setup, then forget about the `pcx/docker` directory and read on.
Warning: this image requires CUDA 12.2 or later; it will not run on earlier versions. Make sure that your `nvidia-smi` reports CUDA >= 12.2. If not, update the base `nvidia/cuda` image and the fix at the bottom of `docker/Dockerfile` to use the same CUDA version as your host.
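You can read the host CUDA version directly from the header of the `nvidia-smi` output:

```bash
# The "CUDA Version" field in the header must report 12.2 or higher.
nvidia-smi
```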
Requirements:
- A CUDA >= 12.2 enabled machine with an NVIDIA GPU. You can probably do without a GPU; just omit the steps related to GPU passthrough and configuration.
- Install docker.
- Install nvidia-container-toolkit to enable docker to use the GPU.
- Make sure to restart the docker daemon after the previous step. For example, on Ubuntu this is `sudo systemctl restart docker`.
- Install Visual Studio Code.
- Install the Dev Containers extension in VSCode.
- Optionally, read how to develop inside a container with VS Code.
Once everything is done, open this project in VS Code and execute the Dev Containers: Reopen in Container command (Ctrl/Cmd+Shift+P). This will build the docker image and open the project inside that docker image. Building the docker image for the first time may take around 15-30 minutes, depending on your internet speed.
You can always exit the container by running the Dev Containers: Reopen folder locally command.
You can rebuild the container by running the Dev Containers: Rebuild Container command.
You can check that you are running inside a container by running `hostname`. If it outputs a meaningless 12-character string, you are inside a container. If it outputs the name of your machine, you are not.
When running a Jupyter notebook, you will be prompted to select an environment. Select Python Environments -> Python 3.10 (any of them, as they are all the same).
Important notes:
- You are not supposed to modify the `docker/Dockerfile` unless you know exactly what you are doing and why.
- You are not supposed to run the docker container directly. The Dev Containers extension will do this for you. If you think you need to `docker run -it`, then something is really wrong.
- Use `poetry` to add a Python package to the environment: `poetry add --group dev [package]`. The `--group dev` part should be omitted if the package is needed for the core `pcx` code. Try not to install packages with `pip`.
- Please update your docker to a version newer than 20.10.9 (you can check it as shown after this list). This image is known not to work with docker <= 20.10.9; it fails with the following message: `E: Problem executing scripts APT::Update::Post-Invoke 'rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true'`.
- Sometimes Pylance fails to start because it depends on the Python extension, which starts later. In this case, just reload the window by running the `Developer: Reload Window` command.
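To check which docker version you are on:

```bash
# Must report a version newer than 20.10.9.
docker --version
```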
PyTorch with GPU support: by default, the image will install a CPU-only PyTorch. If you need GPU support with PyTorch, do the following:
- Open the project in a container using Dev Containers as described above.
- Replace ALL occurrences of `source = "torch-cpu"` with `source = "torch-gpu"` in the `pyproject.toml` file.
- Run `poetry lock --no-update` to re-generate the `poetry.lock` file. Note that you should do this while running inside the container.
- Run `poetry install`. Make sure you run it inside the container. It will take up to 20 minutes.
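Once `poetry install` has finished, you can confirm that the GPU build of PyTorch is active (an optional check; run it inside the container):

```bash
# Should print True; False means the CPU-only build is still installed
# or the container was started without GPU passthrough.
python -c "import torch; print(torch.cuda.is_available())"
```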
If you find this library useful in your work, please cite: arXiv link
```bibtex
@article{pinchetti2024benchmarkingpredictivecodingnetworks,
      title={Benchmarking Predictive Coding Networks -- Made Simple},
      author={Luca Pinchetti and Chang Qi and Oleh Lokshyn and Gaspard Olivers and Cornelius Emde and Mufeng Tang and Amine M'Charrak and Simon Frieder and Bayar Menzat and Rafal Bogacz and Thomas Lukasiewicz and Tommaso Salvatori},
      year={2024},
      eprint={2407.01163},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.01163},
}
```

For the code related to the experiments performed in the above paper, please refer to the Submission of Benchmark Paper code release.
The documentation is available at: https://pcx.readthedocs.io/en/stable/
To learn how to build it yourself, go to /docs/README.md.
## Contributing
If you want to contribute to the project, please read CONTRIBUTING.md