# [Transformer XL](https://nn.labml.ai/transformers/xl/index.html)

This is an implementation of
[Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://papers.labml.ai/paper/1901.02860)
in [PyTorch](https://pytorch.org).

The vanilla transformer has a limited attention span, equal to the length of the sequence trained in parallel, and all of these positions have fixed positional encodings. Transformer XL increases the attention span by letting each position attend to precalculated past embeddings.
For instance, if the context length is $l$, it keeps the embeddings of all layers for the previous batch of length $l$ and feeds them to the current step. If we used fixed positional encodings, these pre-calculated embeddings would have the same positions as the current context. Transformer XL therefore introduces relative positional encoding, where the positional information is injected at the attention calculation (see the sketch below).

The annotated implementation of relative multi-headed attention is in [`relative_mha.py`](https://nn.labml.ai/transformers/xl/relative_mha.html).

Here is [the training code](https://nn.labml.ai/transformers/xl/experiment.html) and a notebook for training a Transformer XL model on the Tiny Shakespeare dataset.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/xl/experiment.ipynb)
[![View Run](https://img.shields.io/badge/labml-experiment-brightgreen)](https://app.labml.ai/run/d3b6760c692e11ebb6a70242ac1c0002)
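To make the memory mechanism concrete, here is a minimal single-head sketch of attending over cached embeddings from the previous segment. This is not the repository's `TransformerXL` module; the function and parameter names (`attend_with_memory`, `mem`, `w_q`, `w_k`, `w_v`) are illustrative assumptions, and it uses plain absolute projections rather than the relative positional encoding of `relative_mha.py`.

```python
import torch

def attend_with_memory(h, mem, w_q, w_k, w_v):
    """Sketch: causal attention over the current segment plus cached memory.

    h:   current segment embeddings, shape [seq_len, d_model]
    mem: embeddings cached from the previous segment, shape [mem_len, d_model]
    """
    # Keys and values see both the memory and the current segment;
    # queries come only from the current segment.
    h_cat = torch.cat([mem, h], dim=0)          # [mem_len + seq_len, d_model]
    q = h @ w_q                                  # [seq_len, d_model]
    k = h_cat @ w_k                              # [mem_len + seq_len, d_model]
    v = h_cat @ w_v

    scores = (q @ k.t()) / (q.shape[-1] ** 0.5)  # [seq_len, mem_len + seq_len]

    # Causal mask: position i may attend to all of the memory and to
    # current positions j <= i.
    seq_len, mem_len = h.shape[0], mem.shape[0]
    mask = torch.ones(seq_len, mem_len + seq_len, dtype=torch.bool)
    mask[:, mem_len:] = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    scores = scores.masked_fill(~mask, float('-inf'))

    return torch.softmax(scores, dim=-1) @ v     # [seq_len, d_model]

# Toy usage with hypothetical sizes:
d_model, l = 16, 4
h, mem = torch.randn(l, d_model), torch.randn(l, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = attend_with_memory(h, mem, w_q, w_k, w_v)  # [4, 16]
new_mem = h.detach()  # memory for the next segment; detached so gradients
                      # do not flow back into past segments
```

In the actual implementation the memory is kept per layer, and the attention scores are computed with relative positional encodings rather than the absolute projections above.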