experiment links

Varuna Jayasiri
2022-06-22 17:23:28 +05:30
parent f169f3a71d
commit 33ced53e52
15 changed files with 41 additions and 34 deletions

View File

@@ -70,10 +70,10 @@
<a href='#section-0'>#</a>
</div>
<h1><a href="index.html">Denoising Diffusion Probabilistic Models (DDPM)</a> training</h1>
<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a> <a href="https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels"><img alt="Open In Comet" src="https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model"></a></p>
<p>This trains a DDPM-based model on the CelebA HQ dataset. You can find the download instructions in this <a href="https://forums.fast.ai/t/download-celeba-hq-dataset/45873/3">discussion on fast.ai</a>. Save the images inside <a href="#dataset_path"><code class="highlight"><span></span><span class="n">data</span><span class="o">/</span><span class="n">celebA</span></code>
folder</a>.</p>
<p>The paper used an exponential moving average of the model with a decay of <span class="katex"><span aria-hidden="true" class="katex-html"><span class="base"><span class="strut" style="height:0.64444em;vertical-align:0em;"></span><span class="mord">0.9999</span></span></span></span>. We have skipped this for simplicity.</p>
<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a> <a href="https://www.comet.ml/labml/diffuse/1260757bcd6148e084ad3a46c38ac5c4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step"><img alt="Open In Comet" src="https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model"></a></p>
</div>
<div class='code'>

File diff suppressed because one or more lines are too long

View File

@@ -70,10 +70,10 @@
<a href='#section-0'>#</a>
</div>
<h1><a href="https://nn.labml.ai/diffusion/ddpm/index.html">Denoising Diffusion Probabilistic Models (DDPM)</a></h1>
<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a> <a href="https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels"><img alt="Open In Comet" src="https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model"></a></p>
<p>This is a <a href="https://pytorch.org">PyTorch</a> implementation/tutorial of the paper <a href="https://papers.labml.ai/paper/2006.11239">Denoising Diffusion Probabilistic Models</a>.</p>
<p>In simple terms, we take an image from the dataset and add noise step by step. Then we train a model to predict that noise at each step and use the model to generate images.</p>
<p>Here is the <a href="https://nn.labml.ai/diffusion/ddpm/unet.html">UNet model</a> that predicts the noise and <a href="https://nn.labml.ai/diffusion/ddpm/experiment.html">training code</a>. <a href="https://nn.labml.ai/diffusion/ddpm/evaluate.html">This file</a> can generate samples and interpolations from a trained model.</p>
<p><a href="https://app.labml.ai/run/a44333ea251411ec8007d1a1762ed686"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen"></a> </p>
<p>Here is the <a href="https://nn.labml.ai/diffusion/ddpm/unet.html">UNet model</a> that predicts the noise and <a href="https://nn.labml.ai/diffusion/ddpm/experiment.html">training code</a>. <a href="https://nn.labml.ai/diffusion/ddpm/evaluate.html">This file</a> can generate samples and interpolations from a trained model. </p>
</div>
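To make the "add noise, then predict it" description above concrete, here is a minimal sketch of the simplified DDPM training objective. It is illustrative only, not this repo's code: the `eps_model(x_t, t)` signature, the linear beta schedule, and the step count are assumptions.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, x0: torch.Tensor, n_steps: int = 1000) -> torch.Tensor:
    """Simplified DDPM objective: predict the noise added to x0 at a random step t."""
    beta = torch.linspace(1e-4, 0.02, n_steps, device=x0.device)   # linear beta schedule
    alpha_bar = torch.cumprod(1. - beta, dim=0)                    # cumulative product of (1 - beta_t)
    t = torch.randint(0, n_steps, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)                                     # the noise we will try to recover
    a_bar = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1. - a_bar).sqrt() * eps            # q(x_t | x_0): noisy image at step t
    return F.mse_loss(eps_model(x_t, t), eps)                      # train eps_model to predict eps
```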
<div class='code'>

View File

@@ -372,7 +372,7 @@
<url>
<loc>https://nn.labml.ai/diffusion/ddpm/unet.html</loc>
<lastmod>2021-10-24T16:30:00+00:00</lastmod>
<lastmod>2022-06-09T16:30:00+00:00</lastmod>
<priority>1.00</priority>
</url>
@@ -400,7 +400,7 @@
<url>
<loc>https://nn.labml.ai/diffusion/ddpm/evaluate.html</loc>
<lastmod>2021-10-24T16:30:00+00:00</lastmod>
<lastmod>2022-06-09T16:30:00+00:00</lastmod>
<priority>1.00</priority>
</url>

View File

@@ -8,6 +8,9 @@ summary: >
# Fuzzy Tiling Activations (FTA)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/fta/69be11f83693407f82a86dcbb232bcfe?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&viewId=rlJOpXDGtL8zbkcX66R77P5me&xAxis=step)
This is a [PyTorch](https://pytorch.org) implementation/tutorial of
[Fuzzy Tiling Activations: A Simple Approach to Learning Sparse Representations Online](https://papers.labml.ai/paper/aca66d8edc8911eba3db37f65e372566).
@@ -54,9 +57,6 @@ FTA uses this to create soft boundaries between bins.
$$\phi_\eta(z) = 1 - I_{\eta,+} \big( \max(\mathbf{c} - z, 0) + \max(z - \delta - \mathbf{c}, 0) \big)$$
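As a rough illustration of the formula above, here is a minimal sketch of the activation, assuming the paper's fuzzy indicator $I_{\eta,+}(x) = I_+(\eta - x)\,x + I_+(x - \eta)$ and an evenly spaced tiling vector $\mathbf{c}$; the bounds and defaults below are illustrative, not the values used in this implementation.

```python
import torch

def fta(z: torch.Tensor, lower: float = -1., upper: float = 1.,
        delta: float = 0.25, eta: float = 0.01) -> torch.Tensor:
    """Sketch of FTA: maps each input unit to a fuzzy one-hot vector over the bins."""
    c = torch.arange(lower, upper, delta, device=z.device)       # tiling vector c
    d = torch.clamp(c - z[..., None], min=0.) + \
        torch.clamp(z[..., None] - delta - c, min=0.)            # distance of z from each bin [c, c + delta]
    i_eta = torch.where(d <= eta, d, torch.ones_like(d))         # fuzzy indicator I_{eta,+}
    return 1. - i_eta                                            # 1 inside a bin, 0 far away, linear ramp near the edge
```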
[Here's a simple experiment](experiment.html) that uses FTA in a transformer.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/fta/69be11f83693407f82a86dcbb232bcfe?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&viewId=rlJOpXDGtL8zbkcX66R77P5me&xAxis=step)
"""
import torch

View File

@@ -7,6 +7,9 @@ summary: >
# [Fuzzy Tiling Activation](index.html) Experiment
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/fta/69be11f83693407f82a86dcbb232bcfe?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&viewId=rlJOpXDGtL8zbkcX66R77P5me&xAxis=step)
Here we train a transformer that uses [Fuzzy Tiling Activation](index.html) in the
[Feed-Forward Network](../../transformers/feed_forward.html).
We use it for a language model and train it on the Tiny Shakespeare dataset
@@ -14,9 +17,6 @@ for demonstration.
However, this is probably not the ideal task for FTA, and we
believe FTA is more suitable for modeling data with continuous variables.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/activations/fta/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/fta/69be11f83693407f82a86dcbb232bcfe?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&viewId=rlJOpXDGtL8zbkcX66R77P5me&xAxis=step)
"""
import copy

View File

@@ -28,7 +28,6 @@ Here's a notebook for training a Capsule Network on the MNIST dataset.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/capsule_networks/mnist.ipynb)
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/e7c08e08586711ebb3e30242ac1c0002)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/capsule-networks/reports/capsule-networks)
"""
import torch.nn as nn

View File

@@ -10,8 +10,6 @@ This is annotated PyTorch code to classify MNIST digits.
This implements the experiment described in the paper
[Dynamic Routing Between Capsules](https://papers.labml.ai/paper/1710.09829).
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=mnist)](https://www.comet.ml/labml/capsule-networks/reports/capsule-networks)
"""
from typing import Any

View File

@@ -8,6 +8,9 @@ summary: >
# Denoising Diffusion Probabilistic Models (DDPM)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels)
This is a [PyTorch](https://pytorch.org) implementation/tutorial of the paper
[Denoising Diffusion Probabilistic Models](https://papers.labml.ai/paper/2006.11239).
@@ -156,9 +159,6 @@ training.
Here is the [UNet model](unet.html) that gives $\textcolor{lightgreen}{\epsilon_\theta}(x_t, t)$ and
[training code](experiment.html).
[This file](evaluate.html) can generate samples and interpolations from a trained model.
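For a sense of what sample generation involves, here is a minimal sketch of one reverse (ancestral) sampling step $p_\theta(x_{t-1} \mid x_t)$ with $\sigma_t^2 = \beta_t$; the `eps_model` signature and the schedule handling are illustrative assumptions, not this file's code.

```python
import torch

@torch.no_grad()
def sample_step(eps_model, x_t: torch.Tensor, t: int, beta: torch.Tensor) -> torch.Tensor:
    """One reverse step x_t -> x_{t-1}, using the noise predicted by eps_model."""
    alpha = 1. - beta                                   # beta is the full 1-D noise schedule
    alpha_bar = torch.cumprod(alpha, dim=0)             # recomputed here only to stay self-contained
    t_batch = torch.full((x_t.shape[0],), t, dtype=torch.long, device=x_t.device)
    eps = eps_model(x_t, t_batch)                       # predicted noise at step t
    mean = (x_t - beta[t] / (1. - alpha_bar[t]).sqrt() * eps) / alpha[t].sqrt()
    if t == 0:
        return mean                                     # no noise is added at the last step
    return mean + beta[t].sqrt() * torch.randn_like(x_t)
```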
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/1260757bcd6148e084ad3a46c38ac5c4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
"""
from typing import Tuple, Optional

View File

@@ -11,7 +11,7 @@
"source": [
"[![Github](https://img.shields.io/github/stars/labmlai/annotated_deep_learning_paper_implementations?style=social)](https://github.com/labmlai/annotated_deep_learning_paper_implementations)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)\n",
"[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/1260757bcd6148e084ad3a46c38ac5c4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)\n",
"[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels)\n",
"\n",
"## [Denoising Diffusion Probabilistic Models (DDPM)](https://nn.labml.ai/diffusion/ddpm/index.html)\n",
"\n",
@@ -201,7 +201,11 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Initializ"
]
@@ -209,7 +213,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"configs.init()"
@@ -282,7 +290,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": []
}

View File

@@ -8,15 +8,15 @@ summary: >
# [Denoising Diffusion Probabilistic Models (DDPM)](index.html) training
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels)
This trains a DDPM-based model on the CelebA HQ dataset. You can find the download instructions in this
[discussion on fast.ai](https://forums.fast.ai/t/download-celeba-hq-dataset/45873/3).
Save the images inside [`data/celebA` folder](#dataset_path).
The paper used an exponential moving average of the model with a decay of $0.9999$. We have skipped this for
simplicity.
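Since the EMA step is skipped for simplicity, a minimal sketch of the weight averaging the paper describes (decay $0.9999$) is shown below; the `ema_model` copy and the update cadence are illustrative assumptions, not this repo's code.

```python
import copy
import torch

def ema_update(model: torch.nn.Module, ema_model: torch.nn.Module, decay: float = 0.9999):
    """Exponential moving average of the model weights, applied after each optimizer step."""
    with torch.no_grad():
        for p, ema_p in zip(model.parameters(), ema_model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1. - decay)

# Usage sketch: ema_model = copy.deepcopy(model); call ema_update(model, ema_model) after every
# training step, and sample with ema_model instead of model.
```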
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/1260757bcd6148e084ad3a46c38ac5c4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
"""
from typing import List

View File

@@ -1,5 +1,8 @@
# [Denoising Diffusion Probabilistic Models (DDPM)](https://nn.labml.ai/diffusion/ddpm/index.html)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/diffusion/ddpm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=capsule_networks&file=model)](https://www.comet.ml/labml/diffuse/view/FknjSiKWotr8fgZerpC1sV1cy/panels)
This is a [PyTorch](https://pytorch.org) implementation/tutorial of the paper
[Denoising Diffusion Probabilistic Models](https://papers.labml.ai/paper/2006.11239).
@@ -11,5 +14,3 @@ Here is the [UNet model](https://nn.labml.ai/diffusion/ddpm/unet.html) that pred
[training code](https://nn.labml.ai/diffusion/ddpm/experiment.html).
[This file](https://nn.labml.ai/diffusion/ddpm/evaluate.html) can generate samples and interpolations
from a trained model.
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/a44333ea251411ec8007d1a1762ed686)

View File

@@ -7,6 +7,9 @@ summary: >
# DeepNorm
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/deep_norm/experiment.ipynb)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=deep_norm&file=model)](https://www.comet.ml/labml/deep-norm/61d817f80ff143c8825fba4aacd431d4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
This is a [PyTorch](https://pytorch.org) implementation of
DeepNorm from the paper
[DeepNet: Scaling Transformers to 1,000 Layers](https://papers.labml.ai/paper/2203.00555).
@@ -66,10 +69,6 @@ Where $N$ is the number of layers in the encoder and $M$ is the number of layers
Refer to [the paper](https://papers.labml.ai/paper/2203.00555) for derivation.
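As a rough sketch of the update in question: DeepNorm replaces the usual post-LayerNorm residual connection with $x_{l+1} = \mathrm{LN}(\alpha \, x_l + G_l(x_l))$, where $G_l$ is the sub-layer (attention or feed-forward) and $\alpha$, together with the initialization gain $\beta$ on $G_l$'s weights, is a constant in $N$ and $M$ given in the paper. The module below is an illustrative sketch, not this repo's class.

```python
import torch
import torch.nn as nn

class DeepNormResidual(nn.Module):
    """Sketch of the DeepNorm residual connection: LayerNorm(alpha * x + g),
    where g = G(x) is the sub-layer output and alpha comes from the paper's constants."""
    def __init__(self, alpha: float, d_model: int):
        super().__init__()
        self.alpha = alpha
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, gx: torch.Tensor) -> torch.Tensor:
        return self.norm(self.alpha * x + gx)
```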
[Here is an experiment implementation](experiment.html) that uses DeepNorm.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/deep_norm/experiment.ipynb)
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/ec8e4dacb7f311ec8d1cd37d50b05c3d)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=deep_norm&file=model)](https://www.comet.ml/labml/deep-norm/61d817f80ff143c8825fba4aacd431d4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
"""
from typing import Union, List

View File

@@ -11,7 +11,6 @@
"source": [
"[![Github](https://img.shields.io/github/stars/labmlai/annotated_deep_learning_paper_implementations?style=social)](https://github.com/labmlai/annotated_deep_learning_paper_implementations)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/deep_norm/experiment.ipynb)\n",
"[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/ec8e4dacb7f311ec8d1cd37d50b05c3d)\n",
"[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=deep_norm&file=colab)](https://www.comet.ml/labml/deep-norm/61d817f80ff143c8825fba4aacd431d4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)\n",
"\n",
"## DeepNorm\n",

View File

@@ -8,7 +8,6 @@ summary: >
# [DeepNorm](index.html) Experiment
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/deep_norm/experiment.ipynb)
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/ec8e4dacb7f311ec8d1cd37d50b05c3d)
[![Open In Comet](https://images.labml.ai/images/comet.svg?experiment=deep_norm&file=experiment)](https://www.comet.ml/labml/deep-norm/61d817f80ff143c8825fba4aacd431d4?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step)
"""