Mirror of https://github.com/labmlai/annotated_deep_learning_paper_implementations.git (synced 2025-08-26 08:41:23 +08:00)

Commit: labml app links
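The hunks below make two mechanical changes across the docstrings and README: the project name in headings goes from "LabML" to "labml.ai", and experiment-run links move from `https://web.lab-ml.com/run?uuid=<id>` to `https://app.labml.ai/run/<id>`. As a minimal sketch of that second change, assuming a plain regex pass over the doc text is sufficient (the `update_run_links` helper is hypothetical, not part of the repo):

```python
import re

# Old run-link format, as seen on the "-" lines of the diff below.
_OLD_RUN = re.compile(r"https://web\.lab-ml\.com/run\?uuid=([0-9a-f]+)")

def update_run_links(text: str) -> str:
    """Rewrite web.lab-ml.com run links to the app.labml.ai format.

    Hypothetical helper illustrating this commit's change; not from the repo.
    """
    return _OLD_RUN.sub(r"https://app.labml.ai/run/\1", text)

# Example: the compressive transformer run link from the first affected hunk.
assert update_run_links(
    "https://web.lab-ml.com/run?uuid=0d9b5338726c11ebb7c80242ac1c0002"
) == "https://app.labml.ai/run/0d9b5338726c11ebb7c80242ac1c0002"
```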
@@ -1,5 +1,5 @@
 """
-# [LabML Neural Networks](index.html)
+# [labml.ai Neural Networks](index.html)
 
 This is a collection of simple PyTorch implementations of
 neural networks and related algorithms.
@@ -40,4 +40,4 @@ Here are [the training code](https://nn.labml.ai/transformers/compressive/experi
 model on the Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/compressive/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=0d9b5338726c11ebb7c80242ac1c0002)
+[](https://app.labml.ai/run/0d9b5338726c11ebb7c80242ac1c0002)
@@ -32,4 +32,4 @@ Here's [the training code](experiment.html) and a notebook for training a feedba
 [Colab Notebook](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/feedback/experiment.ipynb)
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/feedback/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=d8eb9416530a11eb8fb50242ac1c0002)
+[](https://app.labml.ai/run/d8eb9416530a11eb8fb50242ac1c0002)
@@ -37,7 +37,7 @@ We implemented a custom PyTorch function to improve performance.
 Here's [the training code](experiment.html) and a notebook for training a feedback transformer on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/feedback/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=d8eb9416530a11eb8fb50242ac1c0002)
+[](https://app.labml.ai/run/d8eb9416530a11eb8fb50242ac1c0002)
 """
 
 import math
@@ -13,7 +13,7 @@ where the keys and values are precalculated.
 Here's a Colab notebook for training a feedback transformer on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/feedback/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=d8eb9416530a11eb8fb50242ac1c0002)
+[](https://app.labml.ai/run/d8eb9416530a11eb8fb50242ac1c0002)
 """
 
 import torch
@@ -15,7 +15,7 @@ We try different variants for the [position-wise feedforward network](../feed_fo
 We decided to write a simpler implementation to make it easier for readers who are not familiar.*
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/glu_variants/simple.ipynb)
-[](https://web.lab-ml.com/run?uuid=86b773f65fc911ebb2ac0242ac1c0002)
+[](https://app.labml.ai/run/86b773f65fc911ebb2ac0242ac1c0002)
 """
 import dataclasses
 
@@ -29,7 +29,7 @@ For the transformer we reuse the
 Here's a notebook for training a GPT model on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/gpt/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=0324c6d0562111eba65d0242ac1c0002)
+[](https://app.labml.ai/run/0324c6d0562111eba65d0242ac1c0002)
 """
 
 import torch
@@ -34,7 +34,7 @@ discusses dropping tokens when routing is not balanced.
 Here's [the training code](experiment.html) and a notebook for training a switch transformer on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/switch/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=c4656c605b9311eba13d0242ac1c0002)
+[](https://app.labml.ai/run/c4656c605b9311eba13d0242ac1c0002)
 """
 
 import torch
@@ -27,4 +27,4 @@ discusses dropping tokens when routing is not balanced.
 Here's [the training code](experiment.html) and a notebook for training a switch transformer on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/switch/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=c4656c605b9311eba13d0242ac1c0002)
+[](https://app.labml.ai/run/c4656c605b9311eba13d0242ac1c0002)
@@ -29,7 +29,7 @@ Annotated implementation of relative multi-headed attention is in [`relative_mha
 Here's [the training code](experiment.html) and a notebook for training a transformer XL model on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/xl/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=d3b6760c692e11ebb6a70242ac1c0002)
+[](https://app.labml.ai/run/d3b6760c692e11ebb6a70242ac1c0002)
 """
 
@@ -21,4 +21,4 @@ Annotated implementation of relative multi-headed attention is in [`relative_mha
 Here's [the training code](https://nn.labml.ai/transformers/xl/experiment.html) and a notebook for training a transformer XL model on Tiny Shakespeare dataset.
 
 [](https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/transformers/xl/experiment.ipynb)
-[](https://web.lab-ml.com/run?uuid=d3b6760c692e11ebb6a70242ac1c0002)
+[](https://app.labml.ai/run/d3b6760c692e11ebb6a70242ac1c0002)
@@ -1,7 +1,7 @@
 [](https://join.slack.com/t/labforml/shared_invite/zt-egj9zvq9-Dl3hhZqobexgT7aVKnD14g/)
 [](https://twitter.com/labmlai)
 
-# [LabML Neural Networks](https://nn.labml.ai/index.html)
+# [labml.ai Neural Networks](https://nn.labml.ai/index.html)
 
 This is a collection of simple PyTorch implementations of
 neural networks and related algorithms.