SOTAVerified

Advancing State of the Art in Language Modeling

2023-11-28 · Code Available

David Herel, Tomas Mikolov

Abstract

Generalization is arguably the most important goal of statistical language modeling research. Publicly available benchmarks and papers published with open-source code have been critical to advancing the field. However, it is often very difficult, and sometimes even impossible, to fully reproduce the results reported in publications. In this paper, we propose a simple framework that should help advance the state of the art in language modeling in terms of generalization. We propose to publish not just the code, but also the model's probabilities on the dev and test sets, so that a new model can easily be added into an ensemble. This has crucial advantages: it becomes much easier to determine whether a newly proposed model is actually complementary to the current baseline, so instead of inventing new names for old tricks, the scientific community can advance faster. Finally, this approach promotes diversity of ideas: one does not need to create an individual model that sets a new state of the art to attract attention; it is sufficient to develop a model that learns patterns other models do not. Thus, even a suboptimal model can be found to have value. Remarkably, our approach has yielded new state-of-the-art results on various language modeling benchmarks, with improvements of up to 10%.
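The ensembling step the abstract describes amounts to linear interpolation of per-token probabilities. The sketch below is a minimal illustration of how probabilities published alongside a paper could be combined and scored; it is not the authors' code, and the file names, helper functions, and weight grid are assumptions for the example (in practice the interpolation weight would be tuned on the dev set, not the test set).

```python
import math

def load_probs(path):
    """Load per-token probabilities (one float per line), as a paper
    following this framework might publish them for a shared test set."""
    with open(path) as f:
        return [float(line) for line in f]

def perplexity(probs):
    """Perplexity is the exponential of the mean negative log-probability."""
    nll = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(nll)

def interpolate(p_a, p_b, lam):
    """Linearly interpolate two models' probabilities for the same tokens."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(p_a, p_b)]

# Hypothetical file names; each publication would ship these files.
baseline = load_probs("baseline_test_probs.txt")
new_model = load_probs("new_model_test_probs.txt")

# Coarse grid over the interpolation weight, shown on one set for brevity.
best = min(
    ((lam, perplexity(interpolate(baseline, new_model, lam)))
     for lam in [i / 10 for i in range(11)]),
    key=lambda t: t[1],
)
print(f"best weight={best[0]:.1f}, ensemble perplexity={best[1]:.2f}")
```

If the interpolated perplexity beats the best single model, the new model captures patterns the baseline misses, which is exactly the complementarity test the abstract advocates.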

Tasks

Language Modeling

Benchmark Results

Dataset                    | Model           | Metric          | Claimed | Verified | Status
Penn Treebank (Word Level) | Ensemble of All | Test perplexity | 47.31   | —        | Unverified
WikiText-103               | Ensemble of All | Test perplexity | 13.29   | —        | Unverified
WikiText-2                 | Ensemble of All | Test perplexity | 53.73   | —        | Unverified

Reproductions

No reproductions have been submitted yet.