Cloze-driven Pretraining of Self-attention Networks

2019-03-19 · IJCNLP 2019

Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli

Unverified — Be the first to reproduce this paper.

Abstract

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state of the art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.
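
A minimal sketch of what such a cloze-style reconstruction objective can look like is given below. It is not the paper's implementation: the class name ClozePretrainer, the model sizes, the 15% masking rate, and the use of a random masked subset rather than ablating every word are all illustrative assumptions.

```python
# Sketch of a cloze-style word reconstruction objective: words are ablated
# and predicted from the rest of the text by a bi-directional transformer.
# NOT the paper's implementation; sizes and masking details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClozePretrainer(nn.Module):
    def __init__(self, vocab_size, mask_id, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.mask_id = mask_id
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embeddings (max length 512), omitted details kept simple.
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # bi-directional self-attention
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Ablate a random subset of positions as a cheap stand-in for
        # predicting every word from its surrounding context.
        mask = torch.rand(tokens.shape, device=tokens.device) < 0.15
        corrupted = tokens.clone()
        corrupted[mask] = self.mask_id
        hidden = self.encoder(self.embed(corrupted) + self.pos[:, :corrupted.size(1)])
        logits = self.out(hidden)
        # Reconstruction loss only on the ablated positions.
        return F.cross_entropy(logits[mask], tokens[mask])


# Toy usage with a hypothetical vocabulary of 1000 token ids, id 0 as the mask symbol.
model = ClozePretrainer(vocab_size=1000, mask_id=0)
batch = torch.randint(1, 1000, (8, 32))  # 8 sequences of 32 tokens
loss = model(batch)
loss.backward()
```

In the setup described by the abstract, each word in the text is ablated and reconstructed from all remaining words; the random subset above is only a cheaper stand-in for that objective.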

Tasks

Named Entity Recognition · Constituency Parsing · GLUE

Benchmark Results

Dataset         Model                   Metric    Claimed  Verified  Status
Penn Treebank   CNN Large + fine-tune   F1 score  95.6     -         Unverified

Reproductions