Download the pretrained model and then generate images.
bash ./demo.sh
Yixiao Ge*, Zhuowan Li*, Haiyu Zhao, Guojun Yin, Shuai Yi, Xiaogang Wang, and Hongsheng Li
Neural Information Processing Systems (NIPS), 2018 (* equal contribution)
PyTorch implementation of our NIPS 2018 work. With the proposed Siamese structure, we are able to learn identity-related and pose-unrelated representations.
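As a rough, hypothetical illustration of the weight-sharing idea behind a Siamese structure (this is a toy sketch, not the paper's actual network), both branch inputs pass through one shared encoder, so their identity features can be compared directly:

```python
# Toy sketch (NOT the paper's code): a Siamese pair shares a single encoder,
# so both images are embedded with the same weights and a verification loss
# can compare the resulting identity features.

def encoder(x, weights):
    # Stand-in for the shared CNN: a fixed linear map.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def squared_distance(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

# One set of weights is reused for both branches -- this is the Siamese part.
shared_weights = [[0.5, -0.2], [0.1, 0.3]]

img_a = [1.0, 2.0]   # toy 2-D "image" for branch A
img_b = [1.0, 2.0]   # toy 2-D "image" for branch B (identical here for clarity)

feat_a = encoder(img_a, shared_weights)
feat_b = encoder(img_b, shared_weights)

# Because the weights are shared, identical inputs map to identical features.
print(squared_distance(feat_a, feat_b))  # -> 0.0
```

In the real framework the shared encoder is the identity encoder E, and the distance feeds a verification objective rather than being printed.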
- Python 3
- PyTorch (we ran the code under version 0.3.1; other versions may also work)
pip install scipy pillow torchvision scikit-learn h5py dominate visdom
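As an optional sanity check after installing, the snippet below (a hypothetical helper, not part of this repo) reports which required modules cannot be imported. Note that import names can differ from pip package names (pillow imports as PIL, scikit-learn as sklearn):

```python
# Hypothetical helper: list which of the required modules are not importable.
import importlib.util

def missing_modules(modules):
    """Return the subset of `modules` whose import spec cannot be found."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names, not pip package names (pillow -> PIL, scikit-learn -> sklearn).
required = ["scipy", "PIL", "torchvision", "sklearn", "h5py", "dominate", "visdom"]
print("missing:", missing_modules(required))
```

An empty list means all dependencies are importable.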
- Clone this repo:
git clone https://github.com/yxgeee/FD-GAN
cd FD-GAN/
We conduct experiments on the Market-1501, DukeMTMC-reID, and CUHK03 datasets. Pose landmarks are required for each dataset during training, so we generate the pose files with Realtime Multi-Person Pose Estimation. The raw datasets have been preprocessed by the code in open-reid. Download the prepared datasets by following the steps below:
- Create directories for datasets:
mkdir datasets
cd datasets/
- Download the datasets through the links above, and unzip them under the same root path.
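The steps above boil down to placing every unzipped dataset under one root directory; for example (the archive filenames below are assumptions -- use whatever the download links actually provide):

```shell
# Illustrative only: put every unzipped dataset under one root path.
mkdir -p datasets
# e.g. (archive names are assumptions, one per dataset):
# unzip Market-1501.zip -d datasets/
# unzip DukeMTMC-reID.zip -d datasets/
# unzip CUHK03.zip -d datasets/
ls datasets
```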
As described in the original paper, training our proposed framework consists of three stages.
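At a high level, the schedule can be summarized as follows (the stage descriptions are a paraphrase of the paper, not exact option names; the real commands live in demo.sh):

```python
# Paraphrased outline of the three-stage training schedule; see demo.sh
# and the provided configs for the actual commands and options.
STAGES = [
    ("Stage I", "pretrain the Siamese re-ID baseline (identity encoder E)"),
    ("Stage II", "pretrain the generator and discriminators with E fixed"),
    ("Stage III", "fine-tune the whole FD-GAN framework end-to-end"),
]

for name, description in STAGES:
    print(f"{name}: {description}")
```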
bash ./demo.sh
Then test best_net_E.pth in the same way as described in Stage I.
Please cite our paper if you find the code useful for your research.
@inproceedings{ge2018fdgan,
title={FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification},
author={Ge, Yixiao and Li, Zhuowan and Zhao, Haiyu and Yin, Guojun and Wang, Xiaogang and Li, Hongsheng},
booktitle={Advances in Neural Information Processing Systems},
year={2018}
}
Our code is inspired by pytorch-CycleGAN-and-pix2pix and open-reid.