

@wr0124 wr0124 commented Oct 8, 2025

This PR adds support for an autoregressive training mode in the consistency model, enabled by the new --alg_cm_autoregressive option. When enabled, training alternates between:
- batches in which every input frame is noisy
- batches in which one frame is kept clean (full) and the remaining frames are noisy
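The alternation above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the function name, the (B, T, C, H, W) layout, the fixed noise level, and the even/odd alternation rule are all assumptions made for the example.

```python
import torch

def make_autoregressive_batch(frames: torch.Tensor, sigma: float, step: int) -> torch.Tensor:
    """Build a training batch for an autoregressive consistency mode.

    frames: clean video clips of shape (B, T, C, H, W).
    On even steps, every frame is noised; on odd steps, the first frame of
    each clip is left clean so the model learns to condition on it.
    (Hypothetical sketch; names and the alternation rule are assumptions.)
    """
    noisy = frames + sigma * torch.randn_like(frames)
    if step % 2 == 0:
        return noisy              # full batch of noisy inputs
    noisy[:, 0] = frames[:, 0]    # keep the first frame of each clip clean
    return noisy
```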

Usage

```shell
python3 -W ignore::FutureWarning -W ignore::UserWarning train.py \
  --dataroot path/to/vid \
  --checkpoints_dir path/to/ckpt/ \
  --name cm_vid_debug \
  --gpu_ids 0 \
  --data_relative_paths \
  --model_type cm \
  --data_dataset_mode self_supervised_vid_mask_online \
  --train_batch_size 1 \
  --dataaug_no_rotate \
  --train_iter_size 1 \
  --data_num_threads 1 \
  --train_G_ema \
  --train_G_lr 0.00002 \
  --data_temporal_number_frames 2 \
  --data_temporal_frame_step 1 \
  --train_optim adamw \
  --G_netG unet_vid \
  --data_online_creation_rand_mask_A \
  --output_print_freq 1 \
  --output_display_freq 1 \
  --data_crop_size 32 \
  --data_load_size 32 \
  --train_compute_metrics_test \
  --train_metrics_every 8000 \
  --train_metrics_list PSNR LPIPS SSIM \
  --with_amp \
  --with_tf32 \
  --output_verbose \
  --data_online_creation_crop_size_A 300 \
  --data_online_creation_crop_size_B 300 \
  --alg_cm_autoregressive
```

consistency model with option "alg_cm_autoregressive"