
Generative Latent Replay

Badges: arXiv · PyTorch · License: GPLv3 · Avalanche · Python · uv

Figure: Generative Latent Replay method overview diagram.

Repo for Generative Latent Replay (GLR) - a continual learning method which alleviates catastrophic forgetting through strict regularisation of low-level data representations and synthetic latent replay. Specifically, GLR (see the code sketch after this list):

  1. Freezes the backbone of a network after initial training
  2. Builds generative models of the backbone-output latent representations of each dataset encountered by the model
  3. Samples latent pseudo-examples from these generators for replay during subsequent training (to mitigate catastrophic forgetting)
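
As a concrete illustration, here is a minimal sketch of those three steps, assuming a simple two-part network (frozen backbone plus trainable head) and one Gaussian mixture model (GMM) per class as the latent generator. The names and hyperparameters below are illustrative only and are not the repo's actual API:

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Toy two-part network: feature extractor (backbone) + classifier head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
head = nn.Linear(128, 10)

# 1. Freeze the backbone after training on the first experience.
for p in backbone.parameters():
    p.requires_grad = False

# 2. Fit one GMM per class on the frozen backbone's latent outputs.
def fit_latent_generators(data_loader, n_components=5):
    feats, labels = [], []
    with torch.no_grad():
        for x, y in data_loader:
            feats.append(backbone(x))
            labels.append(y)
    feats, labels = torch.cat(feats), torch.cat(labels)
    return {
        int(c): GaussianMixture(n_components=n_components).fit(
            feats[labels == c].numpy()
        )
        for c in labels.unique()
    }

# 3. Sample labelled latent pseudo-examples to mix into later training batches.
#    The samples are fed to `head` only; the backbone never sees them.
def sample_replay(generators, per_class=16):
    zs, ys = [], []
    for c, gmm in generators.items():
        z, _ = gmm.sample(per_class)
        zs.append(torch.as_tensor(z, dtype=torch.float32))
        ys.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(zs), torch.cat(ys)
```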

Features

Generative latent replay overcomes two issues encountered in traditional replay strategies:

  1. High memory footprint:
    • replay samples can be generated ad hoc rather than stored
    • only [compressed] latent representations are cached
  2. Privacy concerns:
    • replayed data is synthetic

| Continual Learning Method | Replay based | Low memory | Privacy |
|---------------------------|--------------|------------|---------|
| Naive                     | ✗            | ✓          | ✓       |
| Replay                    | ✓            | ✗          | ✗       |
| Latent Replay             | ✓            | ✓          | ✗       |
| Generative Latent Replay  | ✓            | ✓          | ✓       |

Experiments

Description

We compare generative latent replay against the above methods on the following datasets:

  • Permuted MNIST
  • Rotated MNIST
  • CORe50

We also explore the effect of different:

  • generative models (GMM, etc.)
  • network freeze depths
  • replay buffer sizes
  • replay sampling strategies
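
The datasets above correspond to benchmark generators shipped with Avalanche. A hedged sketch of how they might be instantiated is shown below; the experience counts are placeholders, and the notebooks define the configurations actually used:

```python
from avalanche.benchmarks.classic import PermutedMNIST, RotatedMNIST
# CORe50 is likewise available via avalanche.benchmarks.classic.CORe50.

# Placeholder experience counts; see the notebooks for the real settings.
permuted = PermutedMNIST(n_experiences=5)
rotated = RotatedMNIST(n_experiences=5)

# Each benchmark exposes a stream of experiences to train on sequentially.
for experience in permuted.train_stream:
    print(experience.current_experience, len(experience.dataset))
```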

Reproducing experiments

To run experiments, first create and activate a virtual environment:

uv venv
source .venv/bin/activate

Then run the appropriate notebooks detailing the experiments.

Alternatively, you can run the notebook directly in Google Colab:

Benchmark baseline

Porting method

Our implementation is fully compatible with the Avalanche continual learning library, and can be imported as a plugin in the same way as other Avalanche strategies:

from avalanche.training.plugins import StrategyPlugin
from glr.strategies import GenerativeLatentReplay
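
For context, a hypothetical end-to-end usage sketch follows. The GenerativeLatentReplay constructor arguments are assumed to mirror Avalanche's standard strategy signature (model, optimizer, criterion, batch size, epochs); they are not taken from this repo's documentation:

```python
import torch.nn as nn
from torch.optim import SGD
from avalanche.benchmarks.classic import PermutedMNIST
from glr.strategies import GenerativeLatentReplay

model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
benchmark = PermutedMNIST(n_experiences=5)

# Assumed BaseStrategy-style constructor; check the repo for the real signature.
strategy = GenerativeLatentReplay(
    model,
    SGD(model.parameters(), lr=0.01),
    nn.CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=1,
)

# Standard Avalanche training loop over the stream of experiences.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```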

Citation

Important

If you use any of this code in your work, please reference us:

@misc{armstrong2022generative,
      title={Generative Latent Replay for Continual Learning}, 
      author={J. Armstrong and A. Thakur and D. Clifton},
      year={2022},
      howpublished = "\url{https://github.com/iacobo/generative-latent-replay/blob/main/Generative_Latent_Replay.pdf?raw}",
}
