ResNets train layers as residual functions to overcome the degradation problem. The degradation problem is the accuracy of deep neural networks degrading when the number of layers becomes very high: accuracy increases as the number of layers increases, then saturates, and then starts to degrade.
The paper argues that deeper models should perform at least as well as shallower models because the extra layers can just learn to perform an identity mapping.
If $\mathcal{H}(x)$ is the mapping that needs to be learned by a few layers, they train the residual function

$$\mathcal{F}(x) = \mathcal{H}(x) - x$$

instead. And the original function becomes $\mathcal{F}(x) + x$.

In this case, learning the identity mapping for $\mathcal{H}(x)$ is equivalent to learning $\mathcal{F}(x)$ to be $0$, which is easier to learn.
In the parameterized form this can be written as

$$\mathcal{F}(x, \{W_i\}) + x$$

and when the feature map sizes of $\mathcal{F}(x, \{W_i\})$ and $x$ are different the paper suggests doing a linear projection, with learned weights $W_s$:

$$\mathcal{F}(x, \{W_i\}) + W_s x$$
The paper experimented with zero padding instead of linear projections and found linear projections to work better. Also, when the feature map sizes match, they found identity mappings to be better than linear projections.
$\mathcal{F}$ should have more than one layer, otherwise the sum $\mathcal{F}(x, \{W_i\}) + W_s x$ also won't have non-linearities and will be like a linear layer.
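As a minimal sketch of this idea in PyTorch (the `MiniResidual` name and the choice of two $3 \times 3$ convolutions for $\mathcal{F}$ are illustrative assumptions; batch normalization is omitted for brevity):

```python
import torch
from torch import nn


class MiniResidual(nn.Module):
    """Illustrative residual unit: F(x) is two 3x3 convolutions, shortcut is identity."""

    def __init__(self, channels: int):
        super().__init__()
        # F needs more than one layer, otherwise F(x) + x would stay linear
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.conv2(self.act(self.conv1(x)))  # F(x)
        return self.act(f + x)                   # H(x) = F(x) + x
```

Because the shortcut is the identity, the block can at worst learn $\mathcal{F}(x) = 0$ and pass $x$ through unchanged.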
Here is the code for training a ResNet on CIFAR-10.
This does the projection $W_s x$ described above.
* `in_channels` is the number of channels in $x$
* `out_channels` is the number of channels in $\mathcal{F}(x, \{W_i\})$
* `stride` is the stride length in the convolution operation for $\mathcal{F}$. We do the same stride on the shortcut connection, to match the feature-map size.

Convolution layer for linear projection $W_s x$
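A sketch of what this projection module might look like, consistent with the description above; the batch normalization after the convolution is an assumption from common ResNet practice, not something stated here:

```python
import torch
from torch import nn


class ShortcutProjection(nn.Module):
    """Projects x with a strided 1x1 convolution so its shape matches F(x)."""

    def __init__(self, in_channels: int, out_channels: int, stride: int):
        super().__init__()
        # 1x1 convolution implementing W_s x; stride matches F's convolution
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
        # Batch normalization: an assumption, common in ResNet implementations
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.conv(x))
```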
The first convolution layer maps from `in_channels` to `out_channels`, where the `out_channels` is higher than `in_channels` when we reduce the feature map size with a stride length greater than $1$.
The second convolution layer maps from `out_channels` to `out_channels` and always has a stride length of 1.
* `in_channels` is the number of channels in $x$
* `out_channels` is the number of output channels
* `stride` is the stride length in the convolution operation.
Shortcut connection should be a projection if the stride length is not $1$ or if the number of channels changes.
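Putting those pieces together, a sketch of such a residual block (reusing the `ShortcutProjection` sketch above; the batch normalization layers are again an assumption from common practice):

```python
class ResidualBlock(nn.Module):
    """Sketch of a two-convolution residual block with an optional projection shortcut."""

    def __init__(self, in_channels: int, out_channels: int, stride: int):
        super().__init__()
        # First convolution; may change the channel count and shrink the feature map
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        # Second convolution; always stride 1
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU()
        # Projection shortcut when the stride is not 1 or the channel count changes
        if stride != 1 or in_channels != out_channels:
            self.shortcut = ShortcutProjection(in_channels, out_channels, stride)
        else:
            self.shortcut = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = self.shortcut(x)
        f = self.bn2(self.conv2(self.act(self.bn1(self.conv1(x)))))
        return self.act(f + shortcut)
```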
This implements the bottleneck block described in the paper. It has $1 \times 1$, $3 \times 3$, and $1 \times 1$ convolution layers.
The first convolution layer maps from `in_channels` to `bottleneck_channels` with a $1 \times 1$ convolution, where the `bottleneck_channels` is lower than `in_channels`.
The second $3 \times 3$ convolution layer maps from `bottleneck_channels` to `bottleneck_channels`. This can have a stride length greater than $1$ when we want to compress the feature map size.

The third, final convolution layer maps to `out_channels`. `out_channels` is higher than `in_channels` if the stride length is greater than $1$; otherwise, `out_channels` is equal to `in_channels`.
`bottleneck_channels` is less than `in_channels` and the $3 \times 3$ convolution is performed on this shrunk space (hence the bottleneck). The two $1 \times 1$ convolutions decrease and increase the number of channels.
* `in_channels` is the number of channels in $x$
* `bottleneck_channels` is the number of channels for the $3 \times 3$ convolution
* `out_channels` is the number of output channels
* `stride` is the stride length in the convolution operation.

First $1 \times 1$ convolution layer; this maps to `bottleneck_channels`.
Third $1 \times 1$ convolution layer; this maps to `out_channels`.
Shortcut connection should be a projection if the stride length is not $1$ or if the number of channels changes.
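A sketch of the bottleneck block along the same lines (again assuming PyTorch, the `ShortcutProjection` sketch above, and batch normalization as common practice):

```python
class BottleneckResidualBlock(nn.Module):
    """Sketch of a 1x1 -> 3x3 -> 1x1 bottleneck residual block."""

    def __init__(self, in_channels: int, bottleneck_channels: int,
                 out_channels: int, stride: int):
        super().__init__()
        # 1x1 convolution to shrink the channel count to bottleneck_channels
        self.conv1 = nn.Conv2d(in_channels, bottleneck_channels, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(bottleneck_channels)
        # 3x3 convolution in the shrunk space; carries any stride > 1
        self.conv2 = nn.Conv2d(bottleneck_channels, bottleneck_channels,
                               kernel_size=3, stride=stride, padding=1)
        self.bn2 = nn.BatchNorm2d(bottleneck_channels)
        # 1x1 convolution to expand back to out_channels
        self.conv3 = nn.Conv2d(bottleneck_channels, out_channels, kernel_size=1)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU()
        # Projection shortcut when the stride is not 1 or the channel count changes
        if stride != 1 or in_channels != out_channels:
            self.shortcut = ShortcutProjection(in_channels, out_channels, stride)
        else:
            self.shortcut = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = self.shortcut(x)
        f = self.act(self.bn1(self.conv1(x)))
        f = self.act(self.bn2(self.conv2(f)))
        f = self.bn3(self.conv3(f))
        return self.act(f + shortcut)
```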