Towards Adversarially Robust Dataset Distillation by Curvature Regularization (AAAI 2025)

Official implementation of the paper:

"Towards Adversarially Robust Dataset Distillation by Curvature Regularization"
Eric Xue, Yijiang Li, Haoyang Liu, Peiran Wang, Yifan Shen, Haohan Wang
[Paper] [Code] [Website]

Abstract

Dataset distillation (DD) allows datasets to be distilled to a fraction of their original size while preserving their rich distributional information, so that models trained on the distilled datasets can achieve comparable accuracy at a significantly lower computational cost. Recent research in this area has focused on improving the accuracy of models trained on distilled datasets. In this paper, we explore a new perspective on DD: we study how to embed adversarial robustness into distilled datasets, so that models trained on them maintain high accuracy while also acquiring better adversarial robustness. We propose a new method that achieves this by incorporating curvature regularization into the distillation process, with much less computational overhead than standard adversarial training. Extensive empirical experiments show that our method not only outperforms standard adversarial training in both accuracy and robustness with less computational overhead, but also generates robust distilled datasets that can withstand various adversarial attacks.
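
For intuition, a curvature term of this kind can be approximated with a finite difference of input gradients, in the spirit of CURE-style regularizers. The sketch below is a minimal, hypothetical PyTorch illustration of such a penalty; the function name curvature_penalty, the step size h, and the normalization choices are illustrative assumptions and are not taken from this repository's code.

import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1.0):
    # Hypothetical sketch of a finite-difference curvature penalty
    # (CURE-style); the exact form used by GUARD may differ.
    x = x.clone().detach().requires_grad_(True)

    # Input gradient of the loss at the clean point.
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]

    # Step direction: normalized sign of the input gradient (detached).
    z = torch.sign(grad).detach()
    z = z / (z.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)

    # Input gradient at the perturbed point x + h * z.
    x_pert = x + h * z
    loss_pert = F.cross_entropy(model(x_pert), y)
    grad_pert = torch.autograd.grad(loss_pert, x_pert, create_graph=True)[0]

    # Squared norm of the gradient difference approximates local curvature.
    diff = (grad_pert - grad).flatten(1)
    return (diff.norm(dim=1) ** 2).mean()

In a distillation loop, a penalty like this would be scaled by a coefficient and added to the usual objective.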

Run all

Before running the code, you need to modify the PyTorch source code as described in train/README.md. Key flags for run.sh include:

  • -p: whether to train a new teacher model
  • -C: whether to use GUARD
  • -b: the batchnorm statistics regularization coefficient
bash run.sh -x 1 -y 1 -d imagenette -r /home/user/data/ -u 0 -b 10.0 -p -C -h 3.0 -l 100 >> output.log 2>&1 &
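
The trailing >> output.log 2>&1 & appends both standard output and standard error to output.log and runs the job in the background.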

Citation

@inproceedings{xue2025towards,
	author = {Eric Xue and Yijiang Li and Haoyang Liu and Peiran Wang and Yifan Shen and Haohan Wang},
	title = {Towards Adversarially Robust Dataset Distillation by Curvature Regularization},
	booktitle = {Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25)},
	year = {2025},
}

Acknowledgement

Our implementation is based on the code of SRe2L.
