official/resnet/README.md
# ResNet in TensorFlow
Deep residual networks, or ResNets for short, provided the breakthrough idea of identity mappings in order to enable training of very deep convolutional neural networks. This folder contains an implementation of ResNet for the ImageNet dataset written in TensorFlow.
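To make the identity-mapping idea concrete, here is a minimal sketch of a plain residual block (illustrative only, not the code in this folder): the block learns a residual F(x) and adds it back onto an identity shortcut, so the output is relu(x + F(x)).

```python
import tensorflow as tf

def residual_block(inputs, filters=64):
    """Minimal residual block: output = relu(inputs + F(inputs)).

    Assumes `inputs` already has `filters` channels, so the identity
    shortcut can be added without a projection.
    """
    shortcut = inputs
    x = tf.keras.layers.Conv2D(filters, 3, padding='same', use_bias=False)(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.nn.relu(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding='same', use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    # The identity shortcut is what lets very deep stacks of these blocks train.
    return tf.nn.relu(x + shortcut)
```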
See the following papers for more background:
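[1] [Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Dec 2015.

[2] [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Jul 2016.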
In code, v1 refers to the ResNet defined in [1], but where a stride of 2 is used on the 3x3 conv rather than the first 1x1 in the bottleneck. This change results in higher and more stable accuracy with fewer epochs than the original v1 and has been shown to scale to higher batch sizes with minimal degradation in accuracy. There is no originating paper; the first mention we are aware of was in the torch version of [ResNetv1](https://github.com/facebook/fb.resnet.torch). Most popular v1 implementations are in fact this variant, which we call ResNetv1.5.

In testing we found v1.5 requires ~12% more compute to train and has 6% reduced throughput for inference compared to ResNetv1. CIFAR-10 ResNet does not use the bottleneck and is thus the same for v1 as v1.5.
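The v1/v1.5 distinction is only about where the stride-2 downsampling sits inside the bottleneck unit. A rough sketch of the two variants (illustrative only, not this folder's implementation):

```python
import tensorflow as tf

def bottleneck_block(inputs, filters, strides, use_v1_5=True):
    """Bottleneck unit; v1 and v1.5 differ only in where the stride is applied."""
    # v1 (as in [1]): downsample in the first 1x1 convolution.
    # v1.5 (the variant described above): downsample in the 3x3 convolution instead.
    stride_1x1, stride_3x3 = (1, strides) if use_v1_5 else (strides, 1)

    x = tf.keras.layers.Conv2D(filters, 1, strides=stride_1x1, use_bias=False)(inputs)
    x = tf.nn.relu(tf.keras.layers.BatchNormalization()(x))
    x = tf.keras.layers.Conv2D(filters, 3, strides=stride_3x3, padding='same', use_bias=False)(x)
    x = tf.nn.relu(tf.keras.layers.BatchNormalization()(x))
    x = tf.keras.layers.Conv2D(4 * filters, 1, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)

    # Projection shortcut so the shapes match after downsampling.
    shortcut = tf.keras.layers.Conv2D(4 * filters, 1, strides=strides, use_bias=False)(inputs)
    return tf.nn.relu(x + shortcut)
```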
v2 refers to [2]. The principal difference between the two versions is that v1 applies batch normalization and activation after convolution, while v2 applies batch normalization, then activation, and finally convolution (the "pre-activation" arrangement described in [2]).
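Concretely, each version applies the same three operations in a different order; a rough sketch with an illustrative `conv` callable (not a function from this repo):

```python
import tensorflow as tf

# `conv` is any convolution callable, e.g. tf.keras.layers.Conv2D(64, 3, padding='same').

def v1_ordering(x, conv):
    # v1: convolution -> batch normalization -> activation ("post-activation").
    return tf.nn.relu(tf.keras.layers.BatchNormalization()(conv(x)))

def v2_ordering(x, conv):
    # v2: batch normalization -> activation -> convolution ("pre-activation").
    return conv(tf.nn.relu(tf.keras.layers.BatchNormalization()(x)))
```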
## CIFAR-10

First make sure you've [added the models folder to your Python path](/official/#).
Then download and extract the CIFAR-10 data from Alex's website, specifying the location with the `--data_dir` flag. Run the following:
```bash
python cifar10_download_and_extract.py

# Then to train the model, run the following:
python cifar10_main.py
```
Use `--data_dir` to specify the location of the CIFAR-10 data used in the previous step. There are more flag options as described in `cifar10_main.py`.
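For example, if the data was downloaded to `/tmp/cifar10_data` (a placeholder path):

```bash
python cifar10_download_and_extract.py --data_dir=/tmp/cifar10_data
python cifar10_main.py --data_dir=/tmp/cifar10_data
```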
## ImageNet
### Setup
To begin, you will need to download the ImageNet dataset and convert it to TFRecord format. The following [script](https://github.com/tensorflow/tpu/blob/master/tools/datasets/imagenet_to_gcs.py) and [README](https://github.com/tensorflow/tpu/tree/master/tools/datasets#imagenet_to_gcspy) provide a few options.
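A local-only conversion, for instance, looks roughly like the following. The flag names here are assumptions for illustration; confirm them against the linked README before running the script.

```bash
# Illustrative only -- check the linked tools/datasets README for the exact flags.
python imagenet_to_gcs.py --local_scratch_dir=/data/imagenet --nogcs_upload
```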
Once your dataset is ready, you can begin training the model as follows:
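Assuming the ImageNet entry point in this folder is `imagenet_main.py` (and with a placeholder data path):

```bash
python imagenet_main.py --data_dir=/path/to/imagenet_tfrecords
```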
The model will begin training and will automatically evaluate itself on the validation data roughly once per epoch.
Note that there are a number of other options you can specify, including `--model_dir` to choose where to store the model and `--resnet_size` to choose the model size (options include ResNet-18 through ResNet-200). See [`resnet.py`](resnet.py) for the full list of options.
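For example, to train a ResNet-50 and keep checkpoints in a chosen directory (same assumed entry point and placeholder paths as above):

```bash
python imagenet_main.py --data_dir=/path/to/imagenet_tfrecords \
  --model_dir=/tmp/resnet50_model --resnet_size=50
```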
## Compute Devices
Training is accomplished using the [DistributionStrategies API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/distribute/README.md).
The appropriate distribution strategy is chosen based on the `--num_gpus` flag. By default this flag is one if TensorFlow is compiled with CUDA, and zero otherwise.
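For example, to run on two GPUs (same assumed entry point as above):

```bash
python imagenet_main.py --data_dir=/path/to/imagenet_tfrecords --num_gpus=2
```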