To run ADCM, use the following command:
```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12623 --use_env ct_train.py \
  --outdir=ct-runs --data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp --metrics=fid50k_full \
  --transfer=datasets/edm-cifar10-32x32-uncond-vp.pkl --duration=12.8 --tick=12.8 --double=250 \
  --batch=128 --lr=0.0001 --optim=RAdam --dropout=0.3 --augment=0.0 --mode=tuning --loss_type=ADCM
```
Datasets and pretrained DMs can be found at[^1].
Add `--lambda=xxx` to the command if you want to change the Lagrange multiplier.
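For example, a full ADCM run with an explicit multiplier would look like the sketch below; the value `0.1` is purely an illustrative placeholder, not a recommended setting:

```bash
# ADCM training with an explicit Lagrange multiplier; 0.1 is only a placeholder value.
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12623 --use_env ct_train.py \
  --outdir=ct-runs --data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp --metrics=fid50k_full \
  --transfer=datasets/edm-cifar10-32x32-uncond-vp.pkl --duration=12.8 --tick=12.8 --double=250 \
  --batch=128 --lr=0.0001 --optim=RAdam --dropout=0.3 --augment=0.0 --mode=tuning --loss_type=ADCM --lambda=0.1
```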
We have also reproduced ECM[^2], sCM[^3], iCT[^4], CT[^5], and CD[^5]. The code for ECM is from[^6]. Our reproduction of sCM does not include its adaptive weighting and tangent warmup, as we find they may lead to performance degradation.
If you would like to use our reproduction code, simply change `--loss_type` to ECM/SCM/ICT/CT/CD. For example, to run sCM, use the following command:
```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12623 --use_env ct_train.py \
  --outdir=ct-runs --data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp --metrics=fid50k_full \
  --transfer=datasets/edm-cifar10-32x32-uncond-vp.pkl --duration=12.8 --tick=12.8 --double=250 \
  --batch=128 --lr=0.0001 --optim=RAdam --dropout=0.3 --augment=0.0 --mode=tuning --loss_type=SCM
```
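To reproduce all of these baselines in one pass, you can loop over the loss types. This is a minimal sketch that assumes the baselines share the hyperparameters of the sCM command above; if `ct_train.py` does not create a separate run directory per invocation, give each loss type its own `--outdir`:

```bash
# Sketch: sequentially train each reproduced baseline with otherwise identical settings.
for LOSS in ECM SCM ICT CT CD; do
  CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12623 --use_env ct_train.py \
    --outdir=ct-runs --data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp --metrics=fid50k_full \
    --transfer=datasets/edm-cifar10-32x32-uncond-vp.pkl --duration=12.8 --tick=12.8 --double=250 \
    --batch=128 --lr=0.0001 --optim=RAdam --dropout=0.3 --augment=0.0 --mode=tuning --loss_type=${LOSS}
done
```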
[^2]: Geng Z, Pokle A, Luo W, et al. Consistency models made easy[J]. arXiv preprint arXiv:2406.14548, 2024.
[^3]: Lu C, Song Y. Simplifying, stabilizing and scaling continuous-time consistency models[J]. arXiv preprint arXiv:2410.11081, 2024.
[^4]: Song Y, Dhariwal P. Improved techniques for training consistency models[C]//The Twelfth International Conference on Learning Representations, 2024.
[^5]: Song Y, Dhariwal P, Chen M, et al. Consistency models[C]//International Conference on Machine Learning. PMLR, 2023: 32211-32252.