This is the official code implementation of the paper "Adapt before Continual Learning".

Adapt before Continual Learning

Aojun Lu, Tao Feng, Hangjie Yuan, Chunhui Ding, Yanan Sun✉

Abstract: Continual Learning (CL) seeks to enable neural networks to incrementally acquire new knowledge (plasticity) while retaining existing knowledge (stability). While pre-trained models (PTMs) have become pivotal in CL, prevailing approaches freeze the PTM backbone to preserve stability, limiting their plasticity, particularly when encountering significant domain gaps in incremental tasks. Conversely, sequentially finetuning the entire PTM risks catastrophic forgetting of generalizable knowledge, exposing a critical stability-plasticity trade-off. To address this challenge, we propose Adapting PTMs before the core CL process (ACL), a novel framework that refines the PTM backbone through a plug-and-play adaptation phase before learning each new task with existing CL approaches (e.g., prompt tuning). ACL enhances plasticity by aligning embeddings with their original class prototypes while distancing them from others, theoretically and empirically shown to balance stability and plasticity. Extensive experiments demonstrate that ACL significantly improves CL performance across benchmarks and integrated methods, offering a versatile solution for PTM-based CL.
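To make the adaptation phase concrete, below is a minimal PyTorch sketch of the prototype-alignment idea described in the abstract: before the core CL method runs on a new task, the backbone is briefly refined so that each embedding is pulled toward its own class prototype and pushed away from the others. The function names (adaptation_loss, adapt_before_task), the contrastive cross-entropy form of the loss, and all hyperparameters are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def adaptation_loss(embeddings, labels, prototypes, temperature=0.1):
    # Normalize so the dot products below are cosine similarities.
    emb = F.normalize(embeddings, dim=-1)        # (batch, dim)
    protos = F.normalize(prototypes, dim=-1)     # (num_classes, dim)
    logits = emb @ protos.t() / temperature      # (batch, num_classes)
    # Cross-entropy treats the true class prototype as the positive
    # and every other prototype as a negative, so minimizing it aligns
    # embeddings with their own prototype while distancing the rest.
    return F.cross_entropy(logits, labels)

def adapt_before_task(backbone, loader, prototypes, steps=100, lr=1e-4):
    # Briefly refine the backbone on the new task's data; the chosen CL
    # method (e.g., prompt tuning) is then applied on top, unchanged.
    backbone.train()
    opt = torch.optim.AdamW(backbone.parameters(), lr=lr)
    data = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        loss = adaptation_loss(backbone(x), y, prototypes)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because only the backbone is updated against fixed class prototypes, the adaptation stays plug-and-play: any existing PTM-based CL approach can then learn the task as usual.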

🚀 Quick Start Guide

Welcome! Below are the minimal steps to get the project running.

⚠️ Before You Start

All required datasets are available through the open-source toolbox LAMDA-PILOT; follow the instructions there to obtain ImageNet-R, ImageNet-A, etc.

If you have any questions, please feel free to open an issue or contact the author: Aojun Lu ([email protected])

1️⃣ Install Dependencies

pip install -r requirements.txt

2️⃣ Run an Experiment

python main.py --warm_ --config ./exps/[MODEL].json

Replace [MODEL] with any configuration file in /exps, e.g. l2p_inr
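For instance, using the l2p_inr configuration mentioned above:

python main.py --warm_ --config ./exps/l2p_inr.json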

If this repo helps your research, please give it a star!

Citation

If you find this repo useful, please consider citing our paper.

@article{lu2025adapt,
  title={Adapt before Continual Learning},
  author={Lu, Aojun and Feng, Tao and Yuan, Hangjie and Ding, Chunhui and Sun, Yanan},
  journal={arXiv preprint arXiv:2506.03956},
  year={2025}
}

Acknowledgement

Part of this implementation is based on LAMDA-PILOT.
