Varuna Jayasiri
2021-02-05 19:20:17 +05:30
parent 8168b04440
commit 13f36c18f6
6 changed files with 12 additions and 20 deletions

@@ -81,12 +81,10 @@ We believe these would help you understand these algorithms better.</p>
implementations.</p>
<h2>Modules</h2>
<h4><a href="transformers/index.html">Transformers</a></h4>
<p><a href="transformers/index.html">Transformers module</a>
contains implementations for
<a href="transformers/mha.html">multi-headed attention</a>
and
<a href="transformers/relative_mha.html">relative multi-headed attention</a>.</p>
<ul>
<li><a href="transformers/mha.html">Multi-headed attention</a></li>
<li><a href="transformers/models.html">Transformer building blocks</a></li>
<li><a href="transformers/xl/relative_mha.html">Relative multi-headed attention</a>.</li>
<li><a href="transformers/gpt/index.html">GPT Architecture</a></li>
<li><a href="transformers/glu_variants/simple.html">GLU Variants</a></li>
<li><a href="transformers/knn/index.html">kNN-LM: Generalization through Memorization</a></li>

@@ -78,7 +78,7 @@ from paper <a href="https://arxiv.org/abs/1706.03762">Attention Is All You Need</a>,
and derivatives and enhancements of it.</p>
<ul>
<li><a href="mha.html">Multi-head attention</a></li>
<li><a href="relative_mha.html">Relative multi-head attention</a></li>
<li><a href="xl/relative_mha.html">Relative multi-head attention</a></li>
<li><a href="models.html">Transformer Encoder and Decoder Models</a></li>
<li><a href="positional_encoding.html">Fixed positional encoding</a></li>
</ul>

@@ -15,12 +15,9 @@ implementations.
#### ✨ [Transformers](transformers/index.html)
[Transformers module](transformers/index.html)
contains implementations for
[multi-headed attention](transformers/mha.html)
and
[relative multi-headed attention](transformers/relative_mha.html).
* [Multi-headed attention](transformers/mha.html)
* [Transformer building blocks](transformers/models.html)
* [Relative multi-headed attention](transformers/xl/relative_mha.html)
* [GPT Architecture](transformers/gpt/index.html)
* [GLU Variants](transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](transformers/knn/index.html)

@@ -14,7 +14,7 @@ from paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762),
and derivatives and enhancements of it.
* [Multi-head attention](mha.html)
* [Relative multi-head attention](relative_mha.html)
* [Relative multi-head attention](xl/relative_mha.html)
* [Transformer Encoder and Decoder Models](models.html)
* [Fixed positional encoding](positional_encoding.html)

@@ -21,12 +21,9 @@ implementations almost weekly.
#### ✨ [Transformers](https://nn.labml.ai/transformers/index.html)
[Transformers module](https://nn.labml.ai/transformers/index.html)
contains implementations for
[multi-headed attention](https://nn.labml.ai/transformers/mha.html)
and
[relative multi-headed attention](https://nn.labml.ai/transformers/relative_mha.html).
* [Multi-headed attention](https://nn.labml.ai/transformers/mha.html)
* [Transformer building blocks](https://nn.labml.ai/transformers/models.html)
* [Relative multi-headed attention](https://nn.labml.ai/transformers/xl/relative_mha.html)
* [GPT Architecture](https://nn.labml.ai/transformers/gpt/index.html)
* [GLU Variants](https://nn.labml.ai/transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](https://nn.labml.ai/transformers/knn)

@@ -5,7 +5,7 @@ with open("readme.md", "r") as f:
setuptools.setup(
name='labml-nn',
version='0.4.85',
version='0.4.86',
author="Varuna Jayasiri, Nipun Wijerathne",
author_email="vpjayasiri@gmail.com, hnipun@gmail.com",
description="A collection of PyTorch implementations of neural network architectures and layers.",