Wolf is an open-source library for Invertible Generative (Normalizing) Flows.
This is the code we used in the following papers:

Decoupling Global and Local Representations from/for Image Generation
Xuezhe Ma, Xiang Kong, Shanghang Zhang and Eduard Hovy

MaCow: Masked Convolutional Generative Flow
Xuezhe Ma, Xiang Kong, Shanghang Zhang and Eduard Hovy
NeurIPS 2019
Requirements:

- Python >= 3.6
- PyTorch >= 1.3.1
- apex
- lmdb >= 0.94
- overrides
Installation:

- Install NVIDIA Apex.
- Install PyTorch and torchvision.
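
As a quick sanity check of the environment, the snippet below (illustrative, not part of wolf) verifies that the dependencies above are importable and new enough:

```python
# Illustrative environment check; not part of wolf.
import sys

import apex        # NVIDIA Apex, installed from source
import lmdb
import overrides
import torch
import torchvision

assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
print("PyTorch:", torch.__version__)  # expect >= 1.3.1
print("lmdb:", lmdb.__version__)      # expect >= 0.94
```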
First go to the experiments directory:

```bash
cd experiments
```
Training a new CIFAR-10 model:
```bash
python -u train.py \
    --config configs/cifar10/glow-gaussian-uni.json \
    --epochs 15000 --valid_epochs 10 \
    --batch_size 512 --batch_steps 2 --eval_batch_size 1000 --init_batch_size 2048 \
    --lr 0.001 --beta1 0.9 --beta2 0.999 --eps 1e-8 --warmup_steps 50 --weight_decay 1e-6 --grad_clip 0 \
    --image_size 32 --n_bits 8 \
    --data_path <data path> --model_path <model path>
```
The hyper-parameters for other datasets are provided in the paper.
- Config files, including refined versions of Glow and MaCow, are provided in the experiments/configs directory.
- The argument --batch_steps enables gradient accumulation to trade speed for memory: each data batch is split into segments of size batch_size / (num_gpus * batch_steps), and gradients are accumulated over the segments before each optimizer step (see the sketch after this list).
- For distributed training on multiple GPUs, please use distributed.py or slurm.py, and refer to the PyTorch distributed parallel training tutorial; a bare-bones sketch of the underlying pattern is given below.
- Please check the training scripts for details of the other arguments.
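
For concreteness, here is a minimal, self-contained sketch of the gradient-accumulation pattern behind --batch_steps. The toy model, data, and sizes are illustrative assumptions, not wolf's actual training loop:

```python
# Toy sketch of gradient accumulation, the mechanism behind --batch_steps.
# Model, data, and sizes are illustrative; this is not wolf's training code.
import torch
import torch.nn as nn

model = nn.Linear(32, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch, target = torch.randn(512, 32), torch.randn(512, 1)
batch_steps = 2  # each backward pass sees only 512 / batch_steps examples

optimizer.zero_grad()
for x, y in zip(batch.chunk(batch_steps), target.chunk(batch_steps)):
    loss = nn.functional.mse_loss(model(x), y) / batch_steps  # keep the batch average
    loss.backward()  # gradients sum into param.grad across segments
optimizer.step()  # a single parameter update for the full batch
```

Peak activation memory scales with the segment size, so a larger batch_steps fits a bigger effective batch at the cost of extra forward/backward passes.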
We also implement the MaCow model with support for distributed training. To train a new MaCow model, please use the MaCow config files for the corresponding datasets.
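
Distributed training in PyTorch is built on DistributedDataParallel; the following bare-bones sketch shows that pattern for orientation. It is illustrative only (gloo backend on CPU for the demo), not the code in distributed.py or slurm.py:

```python
# Minimal PyTorch DDP pattern; illustrative, not wolf's launcher code.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = DDP(torch.nn.Linear(8, 1))  # wraps the model; syncs gradients
    loss = model(torch.randn(4, 8)).sum()
    loss.backward()  # gradients are all-reduced across processes here
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)  # two CPU processes for the demo
```

With NCCL and GPUs, each rank would move its model to its own device and pass device_ids=[rank] to DDP.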
To cite the MaCow paper:

```
@incollection{macow2019,
    title = {MaCow: Masked Convolutional Generative Flow},
    author = {Ma, Xuezhe and Kong, Xiang and Zhang, Shanghang and Hovy, Eduard},
    booktitle = {Advances in Neural Information Processing Systems 32 (NeurIPS 2019)},
    year = {2019},
    publisher = {Curran Associates, Inc.}
}
```