See the following papers for more background:
[2] [Identity Mappings in Deep Residual Networks](https://arxiv.org/pdf/1603.05027.pdf) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Jul 2016.

In code, v1 refers to the ResNet defined in [1], except that a stride of 2 is used on the 3x3 convolution rather than on the first 1x1 convolution in the bottleneck block. This change results in higher and more stable accuracy in fewer epochs than the original v1 and has been shown to scale to larger batch sizes with minimal degradation in accuracy. There is no originating paper; the earliest mention we are aware of is in the [torch version of ResNetv1](https://github.com/facebook/fb.resnet.torch). Most popular v1 implementations follow this variant, which we call ResNetv1.5. In testing we found that v1.5 requires ~12% more compute to train and has 6% lower inference throughput than ResNetv1. Comparing the v1 model to the v1.5 model, as has been done in some blog posts, is an apples-to-oranges comparison, especially with regard to hardware or platform performance. The CIFAR-10 ResNet does not use the bottleneck block and is therefore not affected by these nuances.
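
As a rough illustration of the stride placement, here is a simplified bottleneck sketch. It is not the code used by this directory; the projection shortcut, the residual addition, and the post-addition activation are all omitted for brevity.

```python
import tensorflow as tf

def conv_bn_relu(x, filters, kernel_size, strides):
    # v1-style ordering: convolution, then batch normalization, then ReLU.
    x = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides,
                               padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

def bottleneck_v1(x, filters, strides):
    # Original v1: the downsampling stride sits on the first 1x1 convolution.
    x = conv_bn_relu(x, filters, 1, strides)
    x = conv_bn_relu(x, filters, 3, 1)
    return conv_bn_relu(x, 4 * filters, 1, 1)

def bottleneck_v1_5(x, filters, strides):
    # v1.5: the downsampling stride moves to the 3x3 convolution.
    x = conv_bn_relu(x, filters, 1, 1)
    x = conv_bn_relu(x, filters, 3, strides)
    return conv_bn_relu(x, 4 * filters, 1, 1)
```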

v2 refers to [2]. The principal difference between the two versions is that v1 applies batch normalization and activation after the convolution, while v2 applies batch normalization, then activation, and finally the convolution. A schematic comparison is presented in Figure 1 (left) of [2].
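
The ordering difference can be sketched in the same spirit (again illustrative only, not the exact building blocks used here):

```python
import tensorflow as tf

def v1_unit(x, filters):
    # v1 ordering: convolution -> batch normalization -> activation.
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

def v2_unit(x, filters):
    # v2 ("pre-activation") ordering: batch normalization -> activation -> convolution.
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    return tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
```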
Please proceed according to which dataset you would like to train/evaluate on:

You can use a pretrained model to initialize a training process. In addition, you are able to freeze all but the final fully connected layers to fine-tune your model. Transfer learning is useful when training on your own small datasets. For a brief look at transfer learning in the context of convolutional neural networks, we recommend reading these [short notes](http://cs231n.github.io/transfer-learning/).
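
To make the freezing idea concrete, here is a minimal Keras-style sketch. It is not the mechanism exposed by this script (which configures fine-tuning through its own flags), and the small model below is just a hypothetical stand-in.

```python
import tensorflow as tf

def freeze_all_but_final_dense(model):
    """Mark every layer except the last one as non-trainable."""
    for layer in model.layers[:-1]:
        layer.trainable = False
    model.layers[-1].trainable = True
    return model

# Hypothetical usage with an arbitrary small classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
freeze_all_but_final_dense(model)
```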

To fine-tune a pretrained ResNet you must make three changes to your training procedure: