Commit 3e37b2d: update readme
1 parent 2c8f57b

1 file changed: +34, -33 lines

README.md

HashGAN: Deep Learning to Hash with Pair Conditional Wasserstein GAN
=====================================

Code for the CVPR 2018 paper ["HashGAN: Deep Learning to Hash with Pair Conditional Wasserstein GAN"](http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_HashGAN_Deep_Learning_CVPR_2018_paper.pdf).

## Prerequisites

- Python3, NumPy, TensorFlow-gpu, SciPy, Matplotlib, OpenCV, easydict, yacs, tqdm
- A recent NVIDIA GPU

We provide an `environment.yml`, so you can simply run `conda env create -f environment.yml` to create the environment, or install the dependencies manually:

```bash
conda create --no-default-packages -n HashGAN python=3.6 && source activate HashGAN
conda install -y numpy scipy matplotlib tensorflow-gpu opencv
pip install easydict yacs tqdm pillow
```
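
As an optional sanity check (a minimal sketch, assuming the environment above with a TensorFlow 1.x `tensorflow-gpu` build, where `tf.test.is_gpu_available()` exists), you can verify that TensorFlow imports and sees the GPU:

```bash
# Print the TensorFlow version and whether a GPU is visible to it.
python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"
```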

## Data Preparation

In the `data_list/` folder, we give three examples showing how to prepare image training data. If you want to add another dataset as input, you need to prepare `train.txt`, `test.txt`, `database.txt`, and `database_nolabel.txt` in the same way as for the CIFAR-10 dataset.

You can download the whole CIFAR-10 dataset, including the images and data lists, from [here](https://github.com/thulab/DeepHash/releases/download/v0.1/cifar10.zip), and unzip it to the `data/cifar10` folder.
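
For example, from the project root (a sketch assuming `wget` and `unzip` are available and that the archive unpacks into a top-level `cifar10/` directory; adjust the target path otherwise):

```bash
# Download the CIFAR-10 images and data lists, then unzip them under data/.
mkdir -p data
wget https://github.com/thulab/DeepHash/releases/download/v0.1/cifar10.zip -O data/cifar10.zip
unzip data/cifar10.zip -d data/
```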

Make sure the tree of `/path/to/project/data/cifar10` looks like this:

```
.
|-- database.txt
|-- database_nolabel.txt
|-- test
|-- test.txt
|-- train
`-- train.txt
```
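
The `*.txt` files above are plain-text data lists. To see their exact layout, you can peek at the downloaded CIFAR-10 lists (a sketch; we assume each line holds an image path followed by a space-separated label vector, as in the DeepHash release, but verify against the files themselves):

```bash
# Inspect the first few entries of the training list to see the expected format.
head -n 3 data/cifar10/train.txt
```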

If you need to run on NUSWIDE_81 and COCO, we recommend following [the HashNet instructions](https://github.com/thuml/HashNet/tree/master/pytorch#datasets) to prepare the NUSWIDE_81 and COCO images.

## Pretrained Models

You can download the pretrained models from the [release page](https://github.com/thuml/HashGAN/releases) and modify the config file to use them.

## Training

The training process can be divided into two steps:

1. Train an image generator.
2. Fine-tune AlexNet using the original labeled images and the generated images.

In the `config` folder, we provide some example yaml configurations:

```
config
├── cifar_evaluation.yaml
├── cifar_step_1.yaml
├── cifar_step_2.yaml
└── nuswide_step_1.yaml
```

You can run the model using commands like the following:

- `python main.py --cfg config/cifar_step_1.yaml --gpus 0`
- `python main.py --cfg config/cifar_step_2.yaml --gpus 0`

You can use TensorBoard to monitor the training process, including the losses and the Mean Average Precision.
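
For example (a sketch; `output/` below is only a placeholder, point `--logdir` at whichever directory the chosen yaml config writes its event files to):

```bash
# Launch TensorBoard on the training log directory, then open http://localhost:6006.
tensorboard --logdir output/ --port 6006
```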

## Citation

If you use this code for your research, please consider citing:

## Contact

If you have any problem with our code, feel free to contact the authors, or describe your problem in Issues.
