{
 "<h1><a href=\"https://nn.labml.ai/transformers/switch/index.html\">Switch Transformer</a></h1>\n<p>This is a miniature <a href=\"https://pytorch.org\">PyTorch</a> implementation of the paper <a href=\"https://papers.labml.ai/paper/2101.03961\">Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity</a>. Our implementation only has a few million parameters and doesn&#x27;t do model parallel distributed training. It does single GPU training, but we implement the concept of switching as described in the paper.</p>\n<p>The Switch Transformer uses different parameters for each token by switching among parameters based on the token. Therefore, only a fraction of parameters are chosen for each token. So you can have more parameters but less computational cost.</p>\n<p>The switching happens at the Position-wise Feedforward network (FFN) of each transformer block. Position-wise feedforward network consists of two sequentially fully connected layers. In switch transformer we have multiple FFNs (multiple experts), and we chose which one to use based on a router. The output is a set of probabilities for picking a FFN, and we pick the one with the highest probability and only evaluate that. So essentially the computational cost is the same as having a single FFN. In our implementation this doesn&#x27;t parallelize well when you have many or large FFNs since it&#x27;s all happening on a single GPU. In a distributed setup you would have each FFN (each very large) on a different device.</p>\n<p>The paper introduces another loss term to balance load among the experts (FFNs) and discusses dropping tokens when routing is not balanced.</p>\n<p>Here&#x27;s <a href=\"experiment.html\">the training code</a> and a notebook for training a switch transformer on Tiny Shakespeare dataset.</p>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/switch/experiment.ipynb\"><span translate=no>_^_0_^_</span></a> <a href=\"https://app.labml.ai/run/353770ce177c11ecaa5fb74452424f46\"><span translate=no>_^_1_^_</span></a> </p>\n": "<h1><a href=\"https://nn.labml.ai/transformers/switch/index.html\">\u5f00\u5173\u53d8\u538b\u5668</a></h1>\n<p>\u8fd9\u662f\u4e00\u7bc7\u8bba\u6587\u300a<a href=\"https://papers.labml.ai/paper/2101.03961\">\u5f00\u5173\u53d8\u538b\u5668\uff1a\u4ee5\u7b80\u5355\u9ad8\u6548\u7684\u7a00\u758f\u5ea6\u7f29\u653e\u5230\u4e07\u4ebf\u53c2\u6570\u6a21\u578b\u300b\u7684</a>\u5fae\u578b <a href=\"https://pytorch.org\">PyTor</a> ch \u5b9e\u73b0\u3002\u6211\u4eec\u7684\u5b9e\u73b0\u53ea\u6709\u51e0\u767e\u4e07\u4e2a\u53c2\u6570\uff0c\u4e0d\u5bf9\u5e76\u884c\u5206\u5e03\u5f0f\u8bad\u7ec3\u8fdb\u884c\u5efa\u6a21\u3002\u5b83\u8fdb\u884c\u5355\u4e2a GPU \u8bad\u7ec3\uff0c\u4f46\u6211\u4eec\u5b9e\u73b0\u4e86\u767d\u76ae\u4e66\u4e2d\u63cf\u8ff0\u7684\u5207\u6362\u6982\u5ff5\u3002</p>\n<p>\u5207\u6362\u8f6c\u6362\u5668\u901a\u8fc7\u6839\u636e\u4ee4\u724c\u5728\u53c2\u6570\u4e4b\u95f4\u5207\u6362\uff0c\u4e3a\u6bcf\u4e2a\u4ee4\u724c\u4f7f\u7528\u4e0d\u540c\u7684\u53c2\u6570\u3002\u56e0\u6b64\uff0c\u4ec5\u4e3a\u6bcf\u4e2a\u4ee4\u724c\u9009\u62e9\u4e00\u5c0f\u90e8\u5206\u53c2\u6570\u3002\u56e0\u6b64\uff0c\u4f60\u53ef\u4ee5\u6709\u66f4\u591a\u7684\u53c2\u6570\uff0c\u4f46\u8ba1\u7b97\u6210\u672c\u66f4\u4f4e\u3002</p>\n<p>\u5207\u6362\u53d1\u751f\u5728\u6bcf\u4e2a\u53d8\u538b\u5668\u6a21\u5757\u7684\u4f4d\u7f6e\u524d\u9988\u7f51\u7edc (FFN) 
\u4e0a\u3002\u4f4d\u7f6e\u524d\u9988\u7f51\u7edc\u7531\u4e24\u4e2a\u4f9d\u6b21\u5b8c\u5168\u8fde\u63a5\u7684\u5c42\u7ec4\u6210\u3002\u5728\u5f00\u5173\u53d8\u538b\u5668\u4e2d\uff0c\u6211\u4eec\u6709\u591a\u4e2aFFN\uff08\u591a\u4e2a\u4e13\u5bb6\uff09\uff0c\u6211\u4eec\u6839\u636e\u8def\u7531\u5668\u9009\u62e9\u4f7f\u7528\u54ea\u4e00\u4e2a\u3002\u8f93\u51fa\u662f\u4e00\u7ec4\u9009\u62e9 FFN \u7684\u6982\u7387\uff0c\u6211\u4eec\u9009\u62e9\u6982\u7387\u6700\u9ad8\u7684\u90a3\u4e2a\uff0c\u7136\u540e\u53ea\u5bf9\u5176\u8fdb\u884c\u8bc4\u4f30\u3002\u56e0\u6b64\uff0c\u4ece\u672c\u8d28\u4e0a\u8bb2\uff0c\u8ba1\u7b97\u6210\u672c\u4e0e\u62e5\u6709\u5355\u4e2a FFN \u76f8\u540c\u3002\u5728\u6211\u4eec\u7684\u5b9e\u73b0\u4e2d\uff0c\u5f53\u4f60\u6709\u8bb8\u591a\u6216\u5927\u578b FFN \u65f6\uff0c\u8fd9\u5e76\u4e0d\u80fd\u5f88\u597d\u5730\u5e76\u884c\uff0c\u56e0\u4e3a\u5b83\u4eec\u90fd\u53d1\u751f\u5728\u5355\u4e2a GPU \u4e0a\u3002\u5728\u5206\u5e03\u5f0f\u8bbe\u7f6e\u4e2d\uff0c\u6bcf\u4e2a FFN\uff08\u6bcf\u4e2a\u975e\u5e38\u5927\uff09\u90fd\u4f4d\u4e8e\u4e0d\u540c\u7684\u8bbe\u5907\u4e0a\u3002</p>\n<p>\u672c\u6587\u4ecb\u7ecd\u4e86\u53e6\u4e00\u4e2a\u635f\u5931\u672f\u8bed\u6765\u5e73\u8861\u4e13\u5bb6\uff08FFN\uff09\u4e4b\u95f4\u7684\u8d1f\u8f7d\uff0c\u5e76\u8ba8\u8bba\u4e86\u5728\u8def\u7531\u4e0d\u5e73\u8861\u65f6\u4e22\u5f03\u4ee4\u724c\u3002</p>\n<p><a href=\"experiment.html\">\u4ee5\u4e0b\u662f\u5728 Tiny Shakespeare \u6570\u636e\u96c6\u4e2d\u8bad\u7ec3\u5f00\u5173\u53d8\u538b\u5668\u7684\u8bad\u7ec3\u4ee3\u7801</a>\u548c\u7b14\u8bb0\u672c\u3002</p>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/switch/experiment.ipynb\"><span translate=no>_^_0_^_</span></a><a href=\"https://app.labml.ai/run/353770ce177c11ecaa5fb74452424f46\"><span translate=no>_^_1_^_</span></a></p>\n",
 "Switch Transformer": "\u5f00\u5173\u53d8\u538b\u5668"
}
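
The switching mechanism described in the text above (a router picks one FFN per token, only that expert is evaluated, and an auxiliary loss balances the load) can be sketched in a few lines of PyTorch. This is a minimal illustration under assumptions, not the repository's actual labml_nn.transformers.switch code: the class name SwitchFFN and its arguments are hypothetical, and expert capacity limits and token dropping are omitted. The load-balancing term follows the paper's form of the fraction of tokens routed to each expert times the mean routing probability for that expert, summed and scaled by the number of experts.

# A minimal sketch of token-level switching; names are illustrative,
# not the repository's actual implementation.
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        # One position-wise FFN (two fully connected layers) per expert.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        # Router: a linear layer giving one logit per expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        self.n_experts = n_experts

    def forward(self, x: torch.Tensor):
        # x: (seq_len, batch, d_model) -> flatten to individual tokens.
        seq_len, batch, d_model = x.shape
        tokens = x.reshape(-1, d_model)
        probs = torch.softmax(self.router(tokens), dim=-1)  # (n_tokens, n_experts)
        # Pick the expert with the highest routing probability for each token.
        max_prob, expert_idx = probs.max(dim=-1)
        out = tokens.new_zeros(tokens.shape)
        for i in range(self.n_experts):
            mask = expert_idx == i
            if mask.any():
                # Only the chosen expert is evaluated for these tokens.
                out[mask] = self.experts[i](tokens[mask])
        # Scale by the routing probability so the router receives gradients.
        out = out * max_prob.unsqueeze(-1)
        # Load-balancing auxiliary loss: fraction of tokens sent to each
        # expert times the mean routing probability for that expert.
        counts = torch.bincount(expert_idx, minlength=self.n_experts).float()
        frac_tokens = counts / tokens.shape[0]
        mean_prob = probs.mean(dim=0)
        load_balance_loss = self.n_experts * (frac_tokens * mean_prob).sum()
        return out.reshape(seq_len, batch, d_model), load_balance_loss

For example, SwitchFFN(d_model=128, d_ff=512, n_experts=4) applied to a (seq_len, batch, 128) tensor returns the switched output together with the auxiliary loss, which would be added to the training loss with a small weight.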