# [Switch Transformer](https://nn.labml.ai/transformers/switch/index.html)

This is a miniature [PyTorch](https://pytorch.org) implementation of the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961). Our implementation has only a few million parameters and does not do model-parallel distributed training; it trains on a single GPU, but we implement the concept of switching as described in the paper.

The Switch Transformer uses different parameters for each token by switching among parameters based on the token. Therefore, only a fraction of the parameters is chosen for each token, so you can have more parameters with a lower computational cost.

The switching happens at the position-wise feedforward network (FFN) of each transformer block. A position-wise feedforward network consists of two fully connected layers applied in sequence. In the Switch Transformer we have multiple FFNs (multiple experts), and a router chooses which one to use. The router outputs a set of probabilities for picking an FFN; we pick the one with the highest probability and evaluate only that one. So the computational cost is essentially the same as having a single FFN. In our implementation this does not parallelize well when you have many or large FFNs, since it all happens on a single GPU. In a distributed setup you would place each FFN (each very large) on a different device.

The paper introduces another loss term to balance the load among the experts (FFNs) and discusses dropping tokens when routing is not balanced.

Here is [the training code](experiment.html) and a notebook for training a Switch Transformer on the Tiny Shakespeare dataset.
</p>\n": "<h1><a href=\"https://nn.labml.ai/transformers/switch/index.html\">\u5f00\u5173\u53d8\u538b\u5668</a></h1>\n<p>\u8fd9\u662f\u7eb8\u8d28\u300a<a href=\"https://arxiv.org/abs/2101.03961\">\u5f00\u5173\u53d8\u5f62\u91d1\u521a\uff1a\u4ee5\u7b80\u5355\u9ad8\u6548\u7684\u7a00\u758f\u5ea6\u6269\u5c55\u5230\u4e07\u4ebf\u4e2a\u53c2\u6570\u6a21\u578b\u300b\u7684</a>\u5fae\u578b <a href=\"https://pytorch.org\">PyTorch</a> \u5b9e\u73b0\u3002\u6211\u4eec\u7684\u5b9e\u73b0\u53ea\u6709\u51e0\u767e\u4e07\u4e2a\u53c2\u6570\uff0c\u4e0d\u5bf9\u5e76\u884c\u5206\u5e03\u5f0f\u8bad\u7ec3\u8fdb\u884c\u5efa\u6a21\u3002\u5b83\u8fdb\u884c\u5355\u4e2a GPU \u8bad\u7ec3\uff0c\u4f46\u6211\u4eec\u5b9e\u73b0\u4e86\u8bba\u6587\u4e2d\u63cf\u8ff0\u7684\u5207\u6362\u6982\u5ff5\u3002</p>\n<p>Switch Transformer \u901a\u8fc7\u6839\u636e\u4ee4\u724c\u5728\u53c2\u6570\u4e4b\u95f4\u5207\u6362\uff0c\u4e3a\u6bcf\u4e2a\u4ee4\u724c\u4f7f\u7528\u4e0d\u540c\u7684\u53c2\u6570\u3002\u56e0\u6b64\uff0c\u53ea\u4e3a\u6bcf\u4e2a\u4ee3\u5e01\u9009\u62e9\u4e86\u4e00\u5c0f\u90e8\u5206\u53c2\u6570\u3002\u56e0\u6b64\uff0c\u60a8\u53ef\u4ee5\u62e5\u6709\u66f4\u591a\u53c2\u6570\uff0c\u4f46\u8ba1\u7b97\u6210\u672c\u66f4\u4f4e\u3002</p>\n<p>\u5207\u6362\u53d1\u751f\u5728\u6bcf\u4e2a\u53d8\u538b\u5668\u6a21\u5757\u7684\u4f4d\u7f6e\u524d\u9988\u7f51\u7edc (FFN) \u4e0a\u3002\u4f4d\u7f6e\u524d\u9988\u7f51\u7edc\u7531\u4e24\u4e2a\u6309\u987a\u5e8f\u5b8c\u5168\u8fde\u63a5\u7684\u5c42\u7ec4\u6210\u3002\u5728\u4ea4\u6362\u673a\u53d8\u538b\u5668\u4e2d\uff0c\u6211\u4eec\u6709\u591a\u4e2a FFN\uff08\u591a\u4f4d\u4e13\u5bb6\uff09\uff0c\u6211\u4eec\u6839\u636e\u8def\u7531\u5668\u9009\u62e9\u4f7f\u7528\u54ea\u4e00\u4e2a\u3002\u8f93\u51fa\u662f\u4e00\u7ec4\u7528\u4e8e\u9009\u62e9 FFN \u7684\u6982\u7387\uff0c\u6211\u4eec\u9009\u62e9\u6982\u7387\u6700\u9ad8\u7684\u6982\u7387\uff0c\u7136\u540e\u4ec5\u5bf9\u5176\u8fdb\u884c\u8bc4\u4f30\u3002\u56e0\u6b64\uff0c\u4ece\u672c\u8d28\u4e0a\u8bb2\uff0c\u8ba1\u7b97\u6210\u672c\u4e0e\u62e5\u6709\u5355\u4e2a FFN \u76f8\u540c\u3002\u5728\u6211\u4eec\u7684\u5b9e\u73b0\u4e2d\uff0c\u5f53\u4f60\u6709\u8bb8\u591a\u6216\u5927\u578b FFN \u65f6\uff0c\u8fd9\u79cd\u5e76\u884c\u5316\u6548\u679c\u4e0d\u4f73\uff0c\u56e0\u4e3a\u8fd9\u4e00\u5207\u90fd\u53d1\u751f\u5728\u5355\u4e2a GPU \u4e0a\u3002\u5728\u5206\u5e03\u5f0f\u8bbe\u7f6e\u4e2d\uff0c\u4f60\u4f1a\u5c06\u6bcf\u4e2a FFN\uff08\u6bcf\u4e2a\u90fd\u5f88\u5927\uff09\u653e\u5728\u4e0d\u540c\u7684\u8bbe\u5907\u4e0a\u3002</p>\n<p>\u672c\u6587\u5f15\u5165\u4e86\u53e6\u4e00\u4e2a\u635f\u5931\u672f\u8bed\u6765\u5e73\u8861\u4e13\u5bb6\uff08FFN\uff09\u4e4b\u95f4\u7684\u8d1f\u8f7d\uff0c\u5e76\u8ba8\u8bba\u4e86\u8def\u7531\u4e0d\u5e73\u8861\u65f6\u4e22\u5f03\u4ee3\u5e01\u7684\u95ee\u9898\u3002</p>\n<p>\u8fd9\u662f<a href=\"experiment.html\">\u8bad\u7ec3\u4ee3\u7801\u548c\u4e00\u672c</a>\u7528\u4e8e\u5728 Tiny Shakespeare \u6570\u636e\u96c6\u4e0a\u8bad\u7ec3\u5f00\u5173\u53d8\u538b\u5668\u7684\u7b14\u8bb0\u672c\u3002</p>\n",
 "Switch Transformer": "\u5f00\u5173\u53d8\u538b\u5668"
}
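The load-balancing term mentioned above can be computed from the router outputs. The paper's auxiliary loss is the number of experts times the sum, over experts, of the fraction of tokens dispatched to that expert multiplied by the mean router probability assigned to it; it is minimized when routing is uniform. The function below is an illustrative sketch under those assumptions (it reuses the tensors returned by the `SwitchFFN` sketch), not the repository's implementation.

```python
import torch


def load_balancing_loss(route_probs: torch.Tensor, expert_idx: torch.Tensor, n_experts: int):
    # route_probs: [n_tokens, n_experts] softmax outputs of the router
    # expert_idx:  [n_tokens] index of the expert each token was routed to
    n_tokens = route_probs.shape[0]

    # f_i: fraction of tokens dispatched to expert i
    counts = torch.bincount(expert_idx, minlength=n_experts).float()
    f = counts / n_tokens

    # P_i: mean router probability assigned to expert i
    p = route_probs.mean(dim=0)

    # Scaled dot product; uniform routing gives the minimum value of 1.
    return n_experts * torch.sum(f * p)
```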