This is the implementation of the Multi-Content GAN for Few-Shot Font Style Transfer. The code was written by Samaneh Azadi. If you use this code or our collected font dataset for your research, please cite:
Multi-Content GAN for Few-Shot Font Style Transfer; Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell, in arXiv, 2017.
Prerequisites:
- Linux or macOS
- Python 2.7
- CPU or NVIDIA GPU + CUDA CuDNN
- Install PyTorch and dependencies from http://pytorch.org
- Install torchvision from source:
```bash
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```
- Install the visdom, dominate, and scikit-image Python libraries:
```bash
pip install visdom
pip install dominate
pip install scikit-image
```
- Clone this repo:
```bash
mkdir FontTransfer
cd FontTransfer
git clone https://github.com/azadis/MC-GAN
cd MC-GAN
```
- Download our gray-scale 10K font data set:
```bash
./datasets/download_font_dataset.sh Capitals64
```
`../datasets/Capitals64/test_dict/dict.pkl` keeps the randomly observed glyphs the same across different test runs on the Capitals64 dataset. It is a dictionary with font names as keys and random arrays of indices in [0, 26) as values; the length of each array equals the number of non-observed glyphs in that font.
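For example, a minimal sketch of inspecting this dictionary, assuming only that the file unpickles to a plain Python dict as described above:
```python
# Minimal sketch: inspect which glyph indices are held out per font at test time.
# Assumes only that dict.pkl unpickles to a plain dict as described above.
import pickle

with open('../datasets/Capitals64/test_dict/dict.pkl', 'rb') as f:
    test_dict = pickle.load(f)

font = sorted(test_dict.keys())[0]  # pick any font name key
print(font, test_dict[font])        # indices (in [0, 26)) of its non-observed glyphs
```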
`../datasets/Capitals64/BASE/Code New Roman.0.0.png` is a fixed simple font used for training the conditional GAN in the End-to-End model.
- Download our collected in-the-wild font data set (downloaded from http://www6.flamingtext.com/All-Logos):
```bash
./datasets/download_font_dataset.sh public_web_fonts
```
Given a few letters of font `${DATA}`, for example the 5 letters {T,O,W,E,R}, the training directory `${DATA}/A` should contain 5 images, each of dimension 64x(64x26)x3, in which 5 - 1 = 4 letters are given and the rest are zeroed out. Each image should be saved as `${DATA}_${IND}.png`, where `${IND}` is the index (in [0, 26)) of the letter omitted from the observed set. The training directory `${DATA}/B` contains images of dimension 64x64x3 in which only the omitted letter is given; image names follow the same convention as in `${DATA}/A`. `${DATA}/A/test/${DATA}.png` contains all 5 given letters as a single 64x(64x26)x3 image. The structure of the directories for the above real-world fonts (which include only a few observed letters) is as follows; a sketch of how one such training image can be assembled appears after the tree. Refer to the examples in `../datasets/public_web_fonts` for more information.
```
../datasets/public_web_fonts
└── ${DATA}/
    ├── A/
    │   ├── train/${DATA}_${IND}.png
    │   └── test/${DATA}.png
    └── B/
        ├── train/${DATA}_${IND}.png
        └── test/${DATA}.png
```
- (Optional) Download our synthetic color gradient font data set:
```bash
./datasets/download_font_dataset.sh Capitals_colorGrad64
```
- Train Glyph Network:
```bash
./scripts/train_cGAN.sh Capitals64
```
Model parameters will be saved under `./checkpoints/GlyphNet_pretrain`.
- Test Glyph Network after a specific number of epochs (e.g. 400, by setting `EPOCH=400` in `./scripts/test_cGAN.sh`):
```bash
./scripts/test_cGAN.sh Capitals64
```
- (Optional) View the generated images (e.g. after 400 epochs):
```bash
cd ./results/GlyphNet_pretrain/test_400/
```
If you are running the code on your local machine, open `index.html`. If you are running remotely via ssh, run the following on your remote machine:
```bash
python -m SimpleHTTPServer 8881
```
Then, on your local machine, start an SSH tunnel:
```bash
ssh -N -f -L localhost:8881:localhost:8881 remote_user@remote_host
```
Now open your browser on the local machine and type `localhost:8881` in the address bar.
- (Optional) Plot the loss function values during training, from the MC-GAN directory:
```bash
python util/plot_loss.py --logRoot ./checkpoints/GlyphNet_pretrain/
```
- Train End-to-End network (e.g. on `DATA=ft37_1`): you can train the Glyph Network following the instructions above, or download our pre-trained model by running:
```bash
./pretrained_models/download_cGAN_models.sh
```
Now you can train the full model:
```bash
./scripts/train_StackGAN.sh ${DATA}
```
- Test End-to-End network:
```bash
./scripts/test_StackGAN.sh ${DATA}
```
Results will be saved under `./results/${DATA}_MCGAN_train`.
- (Optional) Make a video from your results in different training epochs:
First, train your model and save the model weights at every epoch by setting `opt.save_epoch_freq=1` in `scripts/train_StackGAN.sh`. Then test at different epochs and make the video by running:
```bash
./scripts/make_video.sh ${DATA}
```
Follow the previous steps to visualize the generated images and training curves, replacing `GlyphNet_train` with `${DATA}_StackGAN_train`.
- Flags: see `options/train_options.py`, `options/base_options.py`, and `options/test_options.py` for explanations of each flag.
- Baselines: if you want to use this code to get results for the image translation baseline, or want to try tiling glyphs rather than stacking them, refer to the end of `scripts/train_cGAN.sh`. If you only want to train OrnaNet on top of clean glyphs, refer to the end of `scripts/train_StackGAN.sh`.
- Image dimension: we have tried this network only on 64x64 images of letters. We do not scale and crop images since we set both `opt.fineSize` and `opt.loadSize` to 64, which makes the usual resize-and-crop preprocessing a no-op (sketched below).
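To illustrate that last point, here is a minimal sketch of the usual pix2pix-style resize-then-crop preprocessing that this pair of options controls (the `preprocess` function is our illustration, not the repo's exact code). With `loadSize == fineSize == 64` on 64x64 glyph crops, both steps reduce to identity operations:
```python
# Minimal sketch of pix2pix-style preprocessing (an illustration, not the
# repo's exact code): resize to loadSize, then randomly crop to fineSize.
import random
from PIL import Image

def preprocess(img, load_size=64, fine_size=64):
    img = img.resize((load_size, load_size), Image.BICUBIC)  # no-op on a 64x64 input
    x = random.randint(0, load_size - fine_size)             # crop offset is always 0
    y = random.randint(0, load_size - fine_size)             # when loadSize == fineSize
    return img.crop((x, y, x + fine_size, y + fine_size))    # crop is the full image
```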
If you use this code or the provided dataset for your research, please cite our paper:
```
@inproceedings{azadi2018multi,
  title={Multi-content gan for few-shot font style transfer},
  author={Azadi, Samaneh and Fisher, Matthew and Kim, Vladimir and Wang, Zhaowen and Shechtman, Eli and Darrell, Trevor},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  volume={11},
  pages={13},
  year={2018}
}
```
We thank Elena Sizikova for downloading all fonts used in the 10K font data set.
Code is inspired by pytorch-CycleGAN-and-pix2pix.