
Pay Attention to MLPs (gMLP)

This is a PyTorch implementation of the paper Pay Attention to MLPs.

This paper introduces a Multilayer Perceptron (MLP) based architecture with gating, which they name gMLP. It consists of a stack of L gMLP blocks.
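To make the structure concrete, below is a minimal sketch of one gMLP block and its Spatial Gating Unit in PyTorch. The class names `GMLPBlock` and `SpatialGatingUnit` and the constructor arguments are illustrative assumptions, not the exact interface of this repository's implementation, and the spatial projection here is non-causal (an autoregressive model would additionally mask it).

```python
import torch
from torch import nn


class SpatialGatingUnit(nn.Module):
    """Splits the channels in half, projects one half across the sequence
    (spatial) dimension, and uses it to gate the other half element-wise."""

    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Linear projection across tokens, i.e. along the sequence dimension
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # Initialize near identity (weights ~ 0, bias = 1), as the paper suggests,
        # so each block starts out behaving like a plain MLP
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape `[batch, seq_len, d_ffn]`
        z1, z2 = z.chunk(2, dim=-1)
        z2 = self.norm(z2)
        # Apply the projection over the sequence dimension
        z2 = self.spatial_proj(z2.transpose(1, 2)).transpose(1, 2)
        # Gate one half with the spatially-projected other half
        return z1 * z2


class GMLPBlock(nn.Module):
    """One gMLP block: norm -> channel projection -> GELU ->
    spatial gating -> channel projection, with a residual connection."""

    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.activation = nn.GELU()
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        # The gating unit halves the channel dimension
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.norm(x)
        x = self.activation(self.proj_in(x))
        x = self.sgu(x)
        x = self.proj_out(x)
        return x + shortcut
```

A full model is then just an embedding layer followed by a stack of `L` such blocks, e.g. `nn.Sequential(*[GMLPBlock(d_model, d_ffn, seq_len) for _ in range(L)])`, plus an output projection.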

Here is the training code for an autoregressive model based on gMLP.