Mirror of https://github.com/labmlai/annotated_deep_learning_paper_implementations.git
Synced 2025-08-14 01:13:00 +08:00

Commit: ✍️ typos
@@ -3,12 +3,12 @@
 <head>
 <meta http-equiv="content-type" content="text/html;charset=utf-8"/>
 <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
-<meta name="description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule networks is neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>
+<meta name="description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule network is a neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>

 <meta name="twitter:card" content="summary"/>
 <meta name="twitter:image:src" content="https://avatars1.githubusercontent.com/u/64068543?s=400&v=4"/>
 <meta name="twitter:title" content="Capsule Networks"/>
-<meta name="twitter:description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule networks is neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>
+<meta name="twitter:description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule network is a neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>
 <meta name="twitter:site" content="@labmlai"/>
 <meta name="twitter:creator" content="@labmlai"/>

@@ -18,7 +18,7 @@
 <meta property="og:site_name" content="LabML Neural Networks"/>
 <meta property="og:type" content="object"/>
 <meta property="og:title" content="Capsule Networks"/>
-<meta property="og:description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule networks is neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>
+<meta property="og:description" content="PyTorch implementation and tutorial of Capsule Networks. Capsule network is a neural network architecture that embeds features as capsules and routes them with a voting mechanism to next layer of capsules."/>

 <title>Capsule Networks</title>
 <link rel="shortcut icon" href="/icon.png"/>
@@ -74,15 +74,15 @@
 <h1>Capsule Networks</h1>
 <p>This is a <a href="https://pytorch.org">PyTorch</a> implementation/tutorial of
 <a href="https://arxiv.org/abs/1710.09829">Dynamic Routing Between Capsules</a>.</p>
-<p>Capsule networks is neural network architecture that embeds features
+<p>Capsule network is a neural network architecture that embeds features
 as capsules and routes them with a voting mechanism to next layer of capsules.</p>
 <p>Unlike in other implementations of models, we’ve included a sample, because
 it is difficult to understand some of the concepts with just the modules.
-<a href="mnist.html">This is the annotated code for a model that use capsules to classify MNIST dataset</a></p>
+<a href="mnist.html">This is the annotated code for a model that uses capsules to classify MNIST dataset</a></p>
 <p>This file holds the implementations of the core modules of Capsule Networks.</p>
 <p>I used <a href="https://github.com/jindongwang/Pytorch-CapsuleNet">jindongwang/Pytorch-CapsuleNet</a> to clarify some
 confusions I had with the paper.</p>
-<p>Here’s a notebook for training a Capsule Networks on MNIST dataset.</p>
+<p>Here’s a notebook for training a Capsule Network on MNIST dataset.</p>
 <p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/capsule_networks/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://web.lab-ml.com/run?uuid=e7c08e08586711ebb3e30242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
@@ -351,12 +351,12 @@ The length of each output capsule is the probability that class is present in th
 \lambda (1 - T_k) \max(0, \lVert\mathbf{v}_k\rVert - m^{-})^2</script>
 </p>
 <p>$T_k$ is $1$ if the class $k$ is present and $0$ otherwise.
-The first component of the loss is $0$ when if the class is not present,
-and the second component is $0$ is the class is present.
+The first component of the loss is $0$ when the class is not present,
+and the second component is $0$ if the class is present.
 The $\max(0, x)$ is used to avoid predictions going to extremes.
 $m^{+}$ is set to be $0.9$ and $m^{-}$ to be $0.1$ in the paper.</p>
 <p>The $\lambda$ down-weighting is used to stop the length of all capsules from
-fallind during the initial phase of training.</p>
+falling during the initial phase of training.</p>
 </div>
 <div class='code'>
 <div class="highlight"><pre><span class="lineno">137</span><span class="k">class</span> <span class="nc">MarginLoss</span><span class="p">(</span><span class="n">Module</span><span class="p">):</span></pre></div>
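The margin loss described in the hunk above is compact enough to sketch in a few lines of PyTorch. This is an illustrative sketch only, not the repository's `MarginLoss` module; the function name, tensor names, and the `lambda_ = 0.5` default are assumptions.

```python
import torch

def margin_loss(v: torch.Tensor, labels: torch.Tensor,
                m_pos: float = 0.9, m_neg: float = 0.1, lambda_: float = 0.5) -> torch.Tensor:
    """Sketch of the capsule margin loss.

    v:      output capsules, shape [batch_size, n_classes, capsule_dim]
    labels: class indices, shape [batch_size]
    """
    # ||v_k|| -- length of each output capsule, shape [batch_size, n_classes]
    v_norm = v.norm(dim=-1)
    # T_k is 1 for the true class and 0 otherwise
    t = torch.eye(v.shape[1], device=v.device)[labels]
    # L_k = T_k max(0, m+ - ||v_k||)^2 + lambda (1 - T_k) max(0, ||v_k|| - m-)^2
    loss = (t * torch.relu(m_pos - v_norm) ** 2
            + lambda_ * (1.0 - t) * torch.relu(v_norm - m_neg) ** 2)
    # Sum over classes, average over the batch
    return loss.sum(dim=-1).mean()
```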
@@ -85,8 +85,8 @@ could be in distribution $\mathcal{N}(0.5, 1)$.
 Then, after some training steps, it could move to $\mathcal{N}(0.5, 1)$.
 This is <em>internal covariate shift</em>.</p>
 <p>Internal covariate shift will adversely affect training speed because the later layers
-($l_2$ in the above example) has to adapt to this shifted distribution.</p>
-<p>By stabilizing the distribution batch normalization minimizes the internal covariate shift.</p>
+($l_2$ in the above example) have to adapt to this shifted distribution.</p>
+<p>By stabilizing the distribution, batch normalization minimizes the internal covariate shift.</p>
 <h2>Normalization</h2>
 <p>It is known that whitening improves training speed and convergence.
 <em>Whitening</em> is linearly transforming inputs to have zero mean, unit variance,
@@ -95,9 +95,9 @@ and be uncorrelated.</p>
 <p>Normalizing outside the gradient computation using pre-computed (detached)
 means and variances doesn’t work. For instance. (ignoring variance), let
 <script type="math/tex; mode=display">\hat{x} = x - \mathbb{E}[x]</script>
-where $x = u + b$ and $b$ is a trained bias.
-and $\mathbb{E}[x]$ is outside gradient computation (pre-computed constant).</p>
-<p>Note that $\hat{x}$ has no effect of $b$.
+where $x = u + b$ and $b$ is a trained bias
+and $\mathbb{E}[x]$ is an outside gradient computation (pre-computed constant).</p>
+<p>Note that $\hat{x}$ has no effect on $b$.
 Therefore,
 $b$ will increase or decrease based
 $\frac{\partial{\mathcal{L}}}{\partial x}$,
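A tiny experiment makes the point of this hunk concrete: if the mean is treated as a pre-computed constant (detached), the gradient that reaches $b$ is exactly the same as without normalization, so nothing stops $b$ from drifting. A speculative sketch, not code from the repository:

```python
import torch

# Toy setup: x = u + b with a trainable bias b, purely for illustration.
u = torch.randn(256)
b = torch.zeros(1, requires_grad=True)

x = u + b
# Mean treated as a pre-computed constant: gradients do not flow through it.
x_hat = x - x.mean().detach()

# Any loss that pushes x_hat up pushes b up by the same amount,
# because d(x_hat)/d(b) = 1 regardless of the subtracted constant.
loss = -x_hat.mean()
loss.backward()
print(b.grad)  # -1, identical to the gradient without the normalization step
```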
@@ -106,14 +106,14 @@ The paper notes that similar explosions happen with variances.</p>
 <h3>Batch Normalization</h3>
 <p>Whitening is computationally expensive because you need to de-correlate and
 the gradients must flow through the full whitening calculation.</p>
-<p>The paper introduces simplified version which they call <em>Batch Normalization</em>.
+<p>The paper introduces a simplified version which they call <em>Batch Normalization</em>.
 First simplification is that it normalizes each feature independently to have
 zero mean and unit variance:
 <script type="math/tex; mode=display">\hat{x}^{(k)} = \frac{x^{(k)} - \mathbb{E}[x^{(k)}]}{\sqrt{Var[x^{(k)}]}}</script>
 where $x = (x^{(1)} … x^{(d)})$ is the $d$-dimensional input.</p>
 <p>The second simplification is to use estimates of mean $\mathbb{E}[x^{(k)}]$
 and variance $Var[x^{(k)}]$ from the mini-batch
-for normalization; instead of calculating the mean and variance across whole dataset.</p>
+for normalization; instead of calculating the mean and variance across the whole dataset.</p>
 <p>Normalizing each feature to zero mean and unit variance could affect what the layer
 can represent.
 As an example paper illustrates that, if the inputs to a sigmoid are normalized
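The two simplifications in this hunk boil down to a few tensor operations: estimate a per-feature mean and variance from the current mini-batch and normalize each feature independently. A minimal sketch (the function name, variable names, and the `eps` value are assumptions, not taken from the repository):

```python
import torch

def batch_norm_simplified(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """x has shape [batch_size, features]; statistics come from the mini-batch only."""
    mean = x.mean(dim=0)                       # E[x^(k)] per feature
    var = x.var(dim=0, unbiased=False)         # Var[x^(k)] per feature
    return (x - mean) / torch.sqrt(var + eps)  # x_hat^(k)

x = torch.randn(32, 10) * 3 + 5
x_hat = batch_norm_simplified(x)
print(x_hat.mean(dim=0).abs().max(), x_hat.std(dim=0))  # ~0 mean, ~unit std per feature
```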
@@ -126,8 +126,8 @@ where $y^{(k)}$ is the output of the batch normalization layer.</p>
 like $Wu + b$ the bias parameter $b$ gets cancelled due to normalization.
 So you can and should omit bias parameter in linear transforms right before the
 batch normalization.</p>
-<p>Batch normalization also makes the back propagation invariant to the scale of the weights.
-And empirically it improves generalization, so it has regularization effects too.</p>
+<p>Batch normalization also makes the back propagation invariant to the scale of the weights
+and empirically it improves generalization, so it has regularization effects too.</p>
 <h2>Inference</h2>
 <p>We need to know $\mathbb{E}[x^{(k)}]$ and $Var[x^{(k)}]$ in order to
 perform the normalization.
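As a concrete illustration of the "omit the bias" advice in this hunk, a linear layer feeding a batch-norm layer can simply drop its bias; the shift parameter of the normalization layer takes over that role. A small sketch using standard PyTorch modules (the layer sizes are arbitrary):

```python
import torch.nn as nn

# The linear bias would be cancelled by the mean subtraction in batch norm,
# so it is omitted; BatchNorm1d's own beta parameter provides the shift.
block = nn.Sequential(
    nn.Linear(128, 64, bias=False),
    nn.BatchNorm1d(64),
    nn.ReLU(),
)
```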
@@ -136,7 +136,7 @@ and find the mean and variance, or you can use an estimate calculated during tra
 The usual practice is to calculate an exponential moving average of
 mean and variance during the training phase and use that for inference.</p>
 <p>Here’s <a href="mnist.html">the training code</a> and a notebook for training
-a CNN classifier that use batch normalization for MNIST dataset.</p>
+a CNN classifier that uses batch normalization for MNIST dataset.</p>
 <p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/normalization/batch_norm/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://web.lab-ml.com/run?uuid=011254fe647011ebbb8e0242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
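The exponential moving average mentioned in this hunk is typically a one-line update applied after each training batch; at inference the accumulated statistics replace the batch statistics. A rough sketch (the `momentum` value, function names, and buffer names are assumptions):

```python
import torch

momentum = 0.1
running_mean = torch.zeros(10)
running_var = torch.ones(10)

def update_running_stats(x: torch.Tensor) -> None:
    """Called during training; x has shape [batch_size, features]."""
    global running_mean, running_var
    batch_mean = x.mean(dim=0)
    batch_var = x.var(dim=0, unbiased=False)
    # Exponential moving averages of the mini-batch statistics
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    running_var = (1 - momentum) * running_var + momentum * batch_var

def normalize_at_inference(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Use the accumulated estimates instead of batch statistics
    return (x - running_mean) / torch.sqrt(running_var + eps)
```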
@@ -251,7 +251,7 @@ mean $\mathbb{E}[x^{(k)}]$ and variance $Var[x^{(k)}]$</p>
 <a href='#section-6'>#</a>
 </div>
 <p><code>x</code> is a tensor of shape <code>[batch_size, channels, *]</code>.
-<code>*</code> could be any number of (even 0) dimensions.
+<code>*</code> denotes any number of (possibly 0) dimensions.
 For example, in an image (2D) convolution this will be
 <code>[batch_size, channels, height, width]</code></p>
 </div>
@@ -286,7 +286,7 @@ mean $\mathbb{E}[x^{(k)}]$ and variance $Var[x^{(k)}]$</p>
 <div class='section-link'>
 <a href='#section-9'>#</a>
 </div>
-<p>Sanity check to make sure the number of features is same</p>
+<p>Sanity check to make sure the number of features is the same</p>
 </div>
 <div class='code'>
 <div class="highlight"><pre><span class="lineno">174</span> <span class="k">assert</span> <span class="bp">self</span><span class="o">.</span><span class="n">channels</span> <span class="o">==</span> <span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span></pre></div>

@@ -85,14 +85,14 @@

 <url>
 <loc>https://nn.labml.ai/normalization/layer_norm/index.html</loc>
-<lastmod>2021-02-02T16:30:00+00:00</lastmod>
+<lastmod>2021-02-12T16:30:00+00:00</lastmod>
 <priority>1.00</priority>
 </url>


 <url>
 <loc>https://nn.labml.ai/normalization/layer_norm/readme.html</loc>
-<lastmod>2021-02-02T16:30:00+00:00</lastmod>
+<lastmod>2021-02-12T16:30:00+00:00</lastmod>
 <priority>1.00</priority>
 </url>

@@ -477,7 +477,7 @@

 <url>
 <loc>https://nn.labml.ai/capsule_networks/index.html</loc>
-<lastmod>2021-01-30T16:30:00+00:00</lastmod>
+<lastmod>2021-02-12T16:30:00+00:00</lastmod>
 <priority>1.00</priority>
 </url>

@@ -22,9 +22,9 @@ Then, after some training steps, it could move to $\mathcal{N}(0.5, 1)$.
 This is *internal covariate shift*.

 Internal covariate shift will adversely affect training speed because the later layers
-($l_2$ in the above example) has to adapt to this shifted distribution.
+($l_2$ in the above example) have to adapt to this shifted distribution.

-By stabilizing the distribution batch normalization minimizes the internal covariate shift.
+By stabilizing the distribution, batch normalization minimizes the internal covariate shift.

 ## Normalization

@@ -37,10 +37,10 @@ and be uncorrelated.
 Normalizing outside the gradient computation using pre-computed (detached)
 means and variances doesn't work. For instance. (ignoring variance), let
 $$\hat{x} = x - \mathbb{E}[x]$$
-where $x = u + b$ and $b$ is a trained bias.
-and $\mathbb{E}[x]$ is outside gradient computation (pre-computed constant).
+where $x = u + b$ and $b$ is a trained bias
+and $\mathbb{E}[x]$ is an outside gradient computation (pre-computed constant).

-Note that $\hat{x}$ has no effect of $b$.
+Note that $\hat{x}$ has no effect on $b$.
 Therefore,
 $b$ will increase or decrease based
 $\frac{\partial{\mathcal{L}}}{\partial x}$,
@@ -52,7 +52,7 @@ The paper notes that similar explosions happen with variances.
 Whitening is computationally expensive because you need to de-correlate and
 the gradients must flow through the full whitening calculation.

-The paper introduces simplified version which they call *Batch Normalization*.
+The paper introduces a simplified version which they call *Batch Normalization*.
 First simplification is that it normalizes each feature independently to have
 zero mean and unit variance:
 $$\hat{x}^{(k)} = \frac{x^{(k)} - \mathbb{E}[x^{(k)}]}{\sqrt{Var[x^{(k)}]}}$$
@@ -60,7 +60,7 @@ where $x = (x^{(1)} ... x^{(d)})$ is the $d$-dimensional input.

 The second simplification is to use estimates of mean $\mathbb{E}[x^{(k)}]$
 and variance $Var[x^{(k)}]$ from the mini-batch
-for normalization; instead of calculating the mean and variance across whole dataset.
+for normalization; instead of calculating the mean and variance across the whole dataset.

 Normalizing each feature to zero mean and unit variance could affect what the layer
 can represent.
@@ -76,8 +76,8 @@ like $Wu + b$ the bias parameter $b$ gets cancelled due to normalization.
 So you can and should omit bias parameter in linear transforms right before the
 batch normalization.

-Batch normalization also makes the back propagation invariant to the scale of the weights.
-And empirically it improves generalization, so it has regularization effects too.
+Batch normalization also makes the back propagation invariant to the scale of the weights
+and empirically it improves generalization, so it has regularization effects too.

 ## Inference

@@ -89,7 +89,7 @@ The usual practice is to calculate an exponential moving average of
 mean and variance during the training phase and use that for inference.

 Here's [the training code](mnist.html) and a notebook for training
-a CNN classifier that use batch normalization for MNIST dataset.
+a CNN classifier that uses batch normalization for MNIST dataset.

 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/normalization/batch_norm/mnist.ipynb)
 [![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://web.lab-ml.com/run?uuid=011254fe647011ebbb8e0242ac1c0002)
@@ -162,7 +162,7 @@ class BatchNorm(Module):
     def forward(self, x: torch.Tensor):
         """
         `x` is a tensor of shape `[batch_size, channels, *]`.
-        `*` could be any number of (even 0) dimensions.
+        `*` denotes any number of (possibly 0) dimensions.
         For example, in an image (2D) convolution this will be
         `[batch_size, channels, height, width]`
         """
@@ -170,7 +170,7 @@ class BatchNorm(Module):
         x_shape = x.shape
         # Get the batch size
         batch_size = x_shape[0]
-        # Sanity check to make sure the number of features is same
+        # Sanity check to make sure the number of features is the same
         assert self.channels == x.shape[1]

         # Reshape into `[batch_size, channels, n]`
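The forward pass continues past the end of this hunk. A hedged sketch of how the per-channel statistics could be computed after that reshape, written as a standalone function rather than the repository's exact class body (the function name and `eps` default are assumptions):

```python
import torch

def batch_norm_forward(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Sketch of a training-time forward pass for an input of shape
    [batch_size, channels, *], following the steps listed in the hunk above."""
    batch_size, channels = x.shape[0], x.shape[1]
    # Reshape into [batch_size, channels, n], flattening any trailing dimensions
    x_flat = x.reshape(batch_size, channels, -1)
    # Per-channel statistics over the batch and the flattened dimensions
    mean = x_flat.mean(dim=[0, 2], keepdim=True)
    var = x_flat.var(dim=[0, 2], keepdim=True, unbiased=False)
    # Normalize and restore the original shape
    x_hat = (x_flat - mean) / torch.sqrt(var + eps)
    return x_hat.reshape(x.shape)
```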