This implementation of the Recurrent Forward-Forward Network is based on the following paper. A three-layer network with 2000 neurons per layer is benchmarked on MNIST, achieving 98%+ test accuracy.
This network differs from the paper in that:
- It inverts the objective function (low activations for positive data), which is more biologically plausible and closer to predictive coding; see the sketch after this list.
- It hides the label for the first few timesteps, also in the spirit of predictive coding: activations are high at first and drop for successfully predicted samples.
- It was unclear whether Hinton actually implemented the recurrent connections, since the network diagram he provided was copied from his GLOM paper, but I did implement them here.
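The first two points can be pictured with a short sketch. This is illustrative PyTorch only, not the code in this repository; the threshold value, `label_delay`, and the function names are assumptions made for the example.

```python
# Illustrative sketch of the inverted objective and the delayed label; the
# threshold, label_delay, and function names are hypothetical, not this repo's API.
import torch
import torch.nn.functional as F

THRESHOLD = 2.0  # hypothetical goodness threshold


def goodness(acts: torch.Tensor) -> torch.Tensor:
    # "Goodness" of a layer: mean squared activation per sample (batch, features).
    return acts.pow(2).mean(dim=1)


def inverted_ff_loss(pos_acts: torch.Tensor, neg_acts: torch.Tensor) -> torch.Tensor:
    # Standard Forward-Forward pushes positive goodness ABOVE the threshold;
    # the inverted objective pushes it BELOW (low activations for positive data)
    # and pushes negative goodness above the threshold.
    pos_term = F.softplus(goodness(pos_acts) - THRESHOLD)
    neg_term = F.softplus(THRESHOLD - goodness(neg_acts))
    return (pos_term + neg_term).mean()


def append_label(x: torch.Tensor, y: torch.Tensor, t: int,
                 num_classes: int = 10, label_delay: int = 3) -> torch.Tensor:
    # Hide the one-hot label for the first `label_delay` timesteps so the network
    # first forms its own prediction before the label is revealed.
    one_hot = F.one_hot(y, num_classes).float()
    if t < label_delay:
        one_hot = torch.zeros_like(one_hot)
    return torch.cat([x, one_hot], dim=1)
```

During training, each layer would minimise `inverted_ff_loss` on its own activations, so positive (correctly labelled) samples settle into low activity while negative samples are pushed towards high activity.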
Here is the architecture diagram from the original paper, which is what I have implemented:

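To install the package and run the MNIST benchmark with the tutorial config: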
```bash
pip install -e .
python -m RecurrentFF.benchmarks.mnist.mnist --config-file config_tutorial.toml
```
- Recurrent connections (see the layer sketch below)
- Lateral connections
- Data and label inputs that can change across timesteps
- Dynamic negative data
- Invert objective function: low activations for positive data
- Receptive fields
- Fast weights
- Peer normalization
- Non-differentiable black boxes within the network?
- Support data manipulation for positive data (i.e. transforms)
- Generative circuit
- Support data reconstruction
- Support negative data synthesis
- Benchmark on MNIST
- Benchmark on Moving MNIST
- Benchmark on Seq MNIST
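Several items above (recurrent connections, lateral connections, inputs that change across timesteps) come together in how a hidden layer is updated over time. The sketch below is an illustration under assumed names and shapes, not this repository's actual layer class.

```python
# Illustrative sketch (assumed names and shapes, not this repo's API) of a hidden
# layer combining bottom-up, top-down (recurrent) and lateral input, all taken
# from the PREVIOUS timestep so every layer can be updated in parallel.
import torch
import torch.nn as nn


class SketchLayer(nn.Module):
    def __init__(self, below_dim: int, size: int, above_dim: int):
        super().__init__()
        self.bottom_up = nn.Linear(below_dim, size)        # from the layer below
        self.top_down = nn.Linear(above_dim, size)         # recurrent, from the layer above
        self.lateral = nn.Linear(size, size, bias=False)   # within-layer connections

    def forward(self, below_prev: torch.Tensor, self_prev: torch.Tensor,
                above_prev: torch.Tensor) -> torch.Tensor:
        # Inputs are detached so each layer is trained only by its own local
        # Forward-Forward objective, with no backpropagation between layers.
        pre = (self.bottom_up(below_prev.detach())
               + self.top_down(above_prev.detach())
               + self.lateral(self_prev.detach()))
        return torch.relu(pre)
```

At each timestep the layer's fresh activations would feed the layer-local inverted goodness objective sketched earlier; for the bottom layer, `below_prev` would be the (possibly label-appended) input frame.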
Please see the contributing guide.