SOTAVerified

Global Autoregressive Models for Data-Efficient Sequence Learning

2019-09-16 · CoNLL 2019 · Code Available

Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman


Abstract

Standard autoregressive seq2seq models are easily trained by maximum likelihood, but tend to perform poorly under small-data conditions. We introduce a class of seq2seq models, GAMs (Global Autoregressive Models), which combine an autoregressive component with a log-linear component, allowing the use of global a priori features to compensate for the lack of data. We train these models in two steps. In the first step, we obtain an unnormalized GAM that maximizes the likelihood of the data, but is unsuitable for fast inference or evaluation. In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM and can be used for fast inference and evaluation. Our experiments focus on language modelling under synthetic conditions and show a strong perplexity reduction when using the second autoregressive model instead of the standard one.
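The combination described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical Python/PyTorch illustration, not the authors' released code: it computes the unnormalized GAM log-score log r(x) + <lambda, phi(x)>, with a comment indicating where the distillation step fits. The names `gam_log_potential`, `features`, and `lam` are placeholders introduced here for illustration.

```python
import torch

# Hypothetical sketch of the GAM score from the abstract:
#     P(x)  proportional to  r(x) * exp(<lambda, phi(x)>)
# where r is the autoregressive component and phi(x) is a vector of
# global a priori features. All names below are placeholders.

def gam_log_potential(ar_log_prob: torch.Tensor,
                      features: torch.Tensor,
                      lam: torch.Tensor) -> torch.Tensor:
    """Unnormalized log-score of one sequence x under the GAM.

    ar_log_prob -- log r(x), scalar log-probability of x under the
                   autoregressive component
    features    -- phi(x), the global feature vector of x
    lam         -- lambda, the learned log-linear weights
    """
    return ar_log_prob + torch.dot(lam, features)

# Example: a sequence with log r(x) = -12.3 and two binary global
# features, the first of which fires.
score = gam_log_potential(torch.tensor(-12.3),
                          torch.tensor([1.0, 0.0]),
                          torch.tensor([0.5, -0.2]))
print(score)  # tensor(-11.8000)

# Step 2 (distillation) would then fit a second autoregressive model
# to (approximate) samples from the normalized GAM, e.g. by ordinary
# cross-entropy training, so that inference and evaluation stay fast.
```

Because the log-linear term is global (it scores the whole sequence at once), the GAM itself cannot be sampled token by token; the distillation step exists precisely to recover that fast autoregressive form.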
