| Paper | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| Mega: Moving Average Equipped Gated Attention | Sep 21, 2022 | Image Classification, Inductive Bias | Code Available | 2 |
| Simplified State Space Layers for Sequence Modeling | Aug 9, 2022 | Computational Efficiency, ListOps | Code Available | 2 |
| Cached Transformers: Improving Transformers with Differentiable Memory Cache | Dec 20, 2023 | Image Classification | Code Available | 1 |
| Sequence Modeling with Multiresolution Convolutional Memory | May 2, 2023 | Density Estimation, ListOps | Code Available | 1 |
| Training Discrete Deep Generative Models via Gapped Straight-Through Estimator | Jun 15, 2022 | ListOps, Reinforcement Learning | Code Available | 1 |
| Dynamic Token Normalization Improves Vision Transformers | Dec 5, 2021 | Inductive Bias, ListOps | Code Available | 1 |
| Efficiently Modeling Long Sequences with Structured State Spaces | Oct 31, 2021 | Data Augmentation, Language Modeling | Code Available | 1 |
| The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization | Oct 14, 2021 | ListOps, Systematic Generalization | Code Available | 1 |
| Going Beyond Linear Transformers with Recurrent Fast Weight Programmers | Jun 11, 2021 | Atari Games, ListOps | Code Available | 1 |
| Modeling Hierarchical Structures with Continuous Recursive Neural Networks | Jun 10, 2021 | ListOps, Natural Language Inference | Code Available | 1 |