ST-MoE: Designing Stable and Transferable Sparse Expert Models

2022-02-17

Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, William Fedus


Abstract

Scale has opened new frontiers in natural language processing -- but at a high cost. In response, Mixture-of-Experts (MoE) and Switch Transformers have been proposed as an energy efficient path to even larger and more capable language models. But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning. Our work focuses on these issues and acts as a design guide. We conclude by scaling a sparse model to 269B parameters, with a computational cost comparable to a 32B dense encoder-decoder Transformer (Stable and Transferable Mixture-of-Experts or ST-MoE-32B). For the first time, a sparse model achieves state-of-the-art performance in transfer learning, across a diverse set of tasks including reasoning (SuperGLUE, ARC Easy, ARC Challenge), summarization (XSum, CNN-DM), closed book question answering (WebQA, Natural Questions), and adversarially constructed tasks (Winogrande, ANLI R3).
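For readers unfamiliar with sparse expert layers: the sketch below illustrates top-1 ("Switch") routing, where a learned gate sends each token to a single expert and scales the expert's output by the gate probability. This is a minimal NumPy illustration of the general technique, not the paper's implementation, and all names here (`switch_route`, `gate_w`, `expert_ws`) are hypothetical.

```python
import numpy as np

def switch_route(tokens, gate_w, expert_ws):
    """Top-1 (Switch-style) routing sketch.

    Each token goes to the single expert with the highest router
    probability; the expert output is scaled by that probability so
    the router still receives gradient signal during training.
    """
    logits = tokens @ gate_w                          # [n_tokens, n_experts]
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts
    choice = probs.argmax(axis=-1)                    # top-1 expert per token
    out = np.zeros_like(tokens)
    for e, w in enumerate(expert_ws):
        mask = choice == e                            # tokens routed to expert e
        # scale each expert output by its gate probability
        out[mask] = (tokens[mask] @ w) * probs[mask, e:e + 1]
    return out, choice

# Toy usage with random weights (shapes only, no training).
rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 16
tokens = rng.normal(size=(n_tokens, d))
gate_w = rng.normal(size=(d, n_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
out, choice = switch_route(tokens, gate_w, expert_ws)
```

In a real sparse model each expert is a feed-forward block and tokens are dispatched under a capacity limit; the sketch omits capacity and load-balancing losses, which are central concerns of the paper's stability analysis.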

Benchmark Results

Dataset        Model                          Metric    Claimed  Verified  Status
arc_challenge  ST-MoE-32B 269B (fine-tuned)   Accuracy  86.5               Unverified
arc_challenge  ST-MoE-L 4.1B (fine-tuned)     Accuracy  56.9               Unverified
arc_easy       ST-MoE-L 4.1B (fine-tuned)     Accuracy  75.4               Unverified
arc_easy       ST-MoE-32B 269B (fine-tuned)   Accuracy  95.2               Unverified
ReCoRD         ST-MoE-L 4.1B (fine-tuned)     EM        88.9               Unverified
ReCoRD         ST-MoE-32B 269B (fine-tuned)   EM        95.1               Unverified
WinoGrande     ST-MoE-32B 269B (fine-tuned)   Accuracy  96.1               Unverified
WinoGrande     ST-MoE-L 4.1B (fine-tuned)     Accuracy  81.7               Unverified
