mirror of https://github.com/labmlai/annotated_deep_learning_paper_implementations.git
synced 2025-10-31 10:48:49 +08:00
This commit is contained in: zh
@@ -1,5 +1,5 @@
{
 "<h1>Transformer Auto-Regression Experiment</h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/basic/autoregressive_experiment.ipynb\"><span translate=no>_^_0_^_</span></a> <a href=\"https://comet.ml/labml/transformer/ea8c108c2d94434ca3c2bc2b21015082\"><span translate=no>_^_1_^_</span></a></p>\n<p>This trains a simple transformer introduced in <a href=\"https://papers.labml.ai/paper/1706.03762\">Attention Is All You Need</a> on an NLP auto-regression task (with Tiny Shakespeare dataset).</p>\n": "<h1>Transformer \u81ea\u56de\u5f52\u5b9e\u9a8c</h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/basic/autoregressive_experiment.ipynb\"><span translate=no>_^_0_^_</span></a> <a href=\"https://comet.ml/labml/transformer/ea8c108c2d94434ca3c2bc2b21015082\"><span translate=no>_^_1_^_</span></a></p>\n<p>\u8fd9\u4f1a\u5728 NLP \u81ea\u56de\u5f52\u4efb\u52a1\uff08\u4f7f\u7528 Tiny Shakespeare \u6570\u636e\u96c6\uff09\u4e0a\u8bad\u7ec3 \u201c<a href=\"https://papers.labml.ai/paper/1706.03762\">\u6ce8\u610f\u529b\u5c31\u662f\u4f60\u6240\u9700\u8981\u7684\u4e00\u5207</a>\u201d \u4e2d\u5f15\u5165\u7684\u7b80\u5355 Transformer\u3002</p>\n",
 "<h1>Transformer Auto-Regression Experiment</h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/basic/autoregressive_experiment.ipynb\"><span translate=no>_^_0_^_</span></a></p>\n<p>This trains a simple transformer introduced in <a href=\"https://papers.labml.ai/paper/1706.03762\">Attention Is All You Need</a> on an NLP auto-regression task (with Tiny Shakespeare dataset).</p>\n": "<h1>Transformer \u81ea\u56de\u5f52\u5b9e\u9a8c</h1>\n<p><a href=\"https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/basic/autoregressive_experiment.ipynb\"><span translate=no>_^_0_^_</span></a></p>\n<p>\u8fd9\u5c06\u5728 NLP \u81ea\u56de\u5f52\u4efb\u52a1\uff08\u4f7f\u7528 Tiny Shakespeare \u6570\u636e\u96c6\uff09\u4e0a\u8bad\u7ec3 \u201c<a href=\"https://papers.labml.ai/paper/1706.03762\">\u6ce8\u610f\u529b\u5c31\u662f\u4f60\u6240\u9700\u8981\u7684\u4e00\u5207</a>\u201d \u4e2d\u5f15\u5165\u7684\u7b80\u5355 Transformer\u3002</p>\n",
 "<h2>Auto-Regressive model</h2>\n": "<h2>\u81ea\u56de\u5f52\u6a21\u578b</h2>\n",
 "<h2>Configurations</h2>\n<p>This inherits from <a href=\"../../experiments/nlp_autoregression.html#NLPAutoRegressionConfigs\"><span translate=no>_^_0_^_</span></a></p>\n": "<h2>\u914d\u7f6e</h2>\n<p>\u8fd9\u7ee7\u627f\u81ea <a href=\"../../experiments/nlp_autoregression.html#NLPAutoRegressionConfigs\"><span translate=no>_^_0_^_</span></a></p>\n",
 "<h3>Transformer configurations</h3>\n": "<h3>Transformer \u914d\u7f6e</h3>\n",
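Each entry above maps an English HTML snippet to its Chinese rendering, with `_^_n_^_` tokens standing in for badges and images that must survive translation unchanged. As a minimal sketch (the helper and the sample data here are hypothetical, not part of this repository), a consistency check that every placeholder in a source string also appears in its translation could look like:

```python
import re

# Placeholder tokens of the form _^_0_^_, _^_1_^_, ... used in the
# translation JSON to protect non-translatable fragments.
PLACEHOLDER = re.compile(r"_\^_\d+_\^_")

def check_placeholders(translations: dict) -> list:
    """Return the source keys whose translation drops or adds a placeholder."""
    bad = []
    for src, dst in translations.items():
        # Compare the multiset of placeholder tokens on both sides.
        if sorted(PLACEHOLDER.findall(src)) != sorted(PLACEHOLDER.findall(dst)):
            bad.append(src)
    return bad

# Hypothetical miniature of the file shown in the diff above.
sample = {
    '<p><span translate=no>_^_0_^_</span></p>\n': '<p><span translate=no>_^_0_^_</span></p>\n',
    '<p>_^_0_^_ and _^_1_^_</p>\n': '<p>_^_0_^_</p>\n',  # drops _^_1_^_
}
print(check_placeholders(sample))  # reports the second key only
```

A check like this catches the class of regression where a re-translated string loses one of its `<span translate=no>` badge links.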
Varuna Jayasiri