chainer implementation of pix2pix https://phillipi.github.io/pix2pix/
The Japanese readme can be found here.
From the left side: input, output, ground_truth

- `pip install -r requirements.txt`
- Download the facade dataset (base set) http://cmp.felk.cvut.cz/~tylecr1/facade/
- `python train_facade.py -g [GPU ID, e.g. 0] -i [dataset root directory] --out [output directory] --snapshot_interval 10000`
- Wait a few hours...
  - `--out` stores snapshots of the model and example images at an interval defined by `--snapshot_interval`.
  - If the model size is large, you can increase `--snapshot_interval` (snapshot less often) to save disk space.
- Gather image pairs (e.g. label + photo). Several hundred pairs are required for good results.
- Create a copy of `facade_dataset.py` for your dataset. The function `get_example` should be written so that it returns the i-th image pair as a tuple of numpy arrays, i.e. `(input, output)`; see the sketch after this list.
- It may be necessary to update the loss function in `updater.py`.
- Likewise, make a copy of `facade_visualizer.py` and modify it to visualize the dataset.
- In `train_facade.py`, change `in_ch` and `out_ch` to the correct input and output channels for your data.
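
A minimal sketch of what such a dataset copy might look like, assuming a layout of `root/input/` and `root/output/` holding identically named image files; the class name, directory layout, image size, and normalization here are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical paired-image dataset in the role of facade_dataset.py.
# Directory layout, file naming, resizing, and normalization are assumptions.
import os

import numpy as np
from PIL import Image
from chainer.dataset import DatasetMixin


class PairedImageDataset(DatasetMixin):
    """Returns the i-th image pair as a tuple of float32 numpy arrays (input, output)."""

    def __init__(self, root, size=256):
        # Assumed layout: <root>/input/<name>.png paired with <root>/output/<name>.png
        self.root = root
        self.size = size
        self.names = sorted(os.listdir(os.path.join(root, 'input')))

    def __len__(self):
        return len(self.names)

    def _load(self, subdir, name):
        path = os.path.join(self.root, subdir, name)
        img = Image.open(path).convert('RGB').resize((self.size, self.size))
        # HWC uint8 -> CHW float32, scaled to [-1, 1]
        return np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 127.5 - 1.0

    def get_example(self, i):
        name = self.names[i]
        return self._load('input', name), self._load('output', name)
```

With three-channel images on both sides, as in this sketch, `in_ch` and `out_ch` in `train_facade.py` would both be 3; an input such as a one-hot label map would need a different `in_ch`.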
