This repository is an implementation of the model described in StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation.
The model was trained on the CelebA dataset, which is available in Torchvision (or via the link in main.ipynb).
The main file with the train/test loops is a notebook (main.ipynb). It also contains the config, so all hyperparameters can be found there.
WandB logging and the report can be found on their web page (report, logging). There are a lot of good and bad samples and graphs! (in Russian).
Here are some good examples from the trained models:
There were also some discoveries and experiments, e.g. a comparison of FID architectures:
As a result of these experiments, InstanceNorm (IN) layers are replaced with BatchNorm (BN) in the generator for now. ConvTranspose layers are also replaced with Upsample+Conv layers, as described here. Training was performed on an Nvidia 2080 Ti with images resized to 128x128 and a batch size of 32.
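The upsampling-block swap above can be sketched as follows. This is an illustrative example, not the repo's exact generator: channel sizes and the nearest-neighbor upsampling mode are assumptions.

```python
import torch
import torch.nn as nn

def up_block(in_ch, out_ch):
    # Upsample + Conv in place of ConvTranspose, with BatchNorm
    # instead of InstanceNorm, as described in the README.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# A ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1) would give the
# same spatial doubling, but is prone to checkerboard artifacts.
block = up_block(256, 128)
x = torch.randn(2, 256, 32, 32)
y = block(x)
print(y.shape)  # torch.Size([2, 128, 64, 64])
```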
Thanks to mseitzer for implementing Inception.py for FID calculation.
Thanks to WandB for convenient logging and a beautiful report-writing tool.