mirror of
https://github.com/labmlai/annotated_deep_learning_paper_implementations.git
synced 2025-08-26 08:41:23 +08:00
papers list
@@ -69,7 +69,7 @@
 </div>
 <h1>Compressive Transformer</h1>
 <p>This is an implementation of
-<a href="https://arxiv.org/abs/1911.05507">Compressive Transformers for Long-Range Sequence Modelling</a>
+<a href="https://papers.labml.ai/paper/1911.05507">Compressive Transformers for Long-Range Sequence Modelling</a>
 in <a href="https://pytorch.org">PyTorch</a>.</p>
 <p>This is an extension of <a href="../xl/index.html">Transformer XL</a> where past memories
 are compressed to give a longer attention range.
@@ -69,7 +69,7 @@
 </div>
 <h1><a href="https://nn.labml.ai/transformers/compressive/index.html">Compressive Transformer</a></h1>
 <p>This is an implementation of
-<a href="https://arxiv.org/abs/1911.05507">Compressive Transformers for Long-Range Sequence Modelling</a>
+<a href="https://papers.labml.ai/paper/1911.05507">Compressive Transformers for Long-Range Sequence Modelling</a>
 in <a href="https://pytorch.org">PyTorch</a>.</p>
 <p>This is an extension of <a href="https://nn.labml.ai/transformers/xl/index.html">Transformer XL</a> where past memories
 are compressed to give a longer attention range.
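The documentation text in the diff says past memories "are compressed to give a longer attention range." As context, here is a minimal PyTorch sketch of that idea, assuming the strided 1D-convolution compression function described in the paper; the class name `Conv1dCompression` and its shapes are illustrative, not necessarily the repo's exact implementation:

```python
import torch
import torch.nn as nn


class Conv1dCompression(nn.Module):
    """Compress a block of memories along the sequence dimension by a
    factor of `compression_rate`, using a strided 1D convolution (one
    of the compression functions proposed in the Compressive
    Transformer paper)."""

    def __init__(self, compression_rate: int, d_model: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model,
                              kernel_size=compression_rate,
                              stride=compression_rate)

    def forward(self, mem: torch.Tensor) -> torch.Tensor:
        # mem: [seq_len, batch, d_model]
        mem = mem.permute(1, 2, 0)        # [batch, d_model, seq_len]
        c_mem = self.conv(mem)            # [batch, d_model, seq_len // rate]
        return c_mem.permute(2, 0, 1)     # [seq_len // rate, batch, d_model]


# 8 memory steps compressed 4x -> 2 compressed-memory steps
comp = Conv1dCompression(compression_rate=4, d_model=16)
mem = torch.randn(8, 2, 16)
c_mem = comp(mem)
print(c_mem.shape)  # torch.Size([2, 2, 16])
```

Because each compressed slot summarizes `compression_rate` old memory slots, attending over regular memory plus compressed memory covers a longer effective history at the same attention cost.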