
Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction

2020-05-03 · ACL 2020 · Code Available

Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, Kentaro Inui

Abstract

This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC). The answer to this question is not as straightforward as one might expect because the previous common methods for incorporating an MLM into an EncDec model have potential drawbacks when applied to GEC. For example, the distribution of the inputs to a GEC model can be considerably different (erroneous, clumsy, etc.) from that of the corpora used for pre-training MLMs; however, this issue is not addressed in the previous methods. Our experiments show that our proposed method, where we first fine-tune an MLM with a given GEC corpus and then use the output of the fine-tuned MLM as additional features in the GEC model, maximizes the benefit of the MLM. The best-performing model achieves state-of-the-art performance on the BEA-2019 and CoNLL-2014 benchmarks. Our code is publicly available at: https://github.com/kanekomasahiro/bert-gec.
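The core idea in the abstract — encode the (possibly erroneous) source sentence with an MLM that has been fine-tuned on the GEC corpus, and feed its representations to the EncDec model as additional features — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation (their released code fuses the BERT outputs inside the Transformer layers); the model name, dimensions, and the simple concatenation-plus-projection fusion are assumptions made for the sketch.

```python
# Minimal sketch (assumed model name, dimensions, and concatenation-based fusion)
# of using a fine-tuned MLM's hidden states as extra source-side features.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertFusedSourceEncoder(nn.Module):
    """Concatenates the GEC encoder's own token embeddings with hidden states
    from a BERT model (ideally one already fine-tuned with the MLM objective
    on the GEC training sentences), then projects back to the model dimension."""

    def __init__(self, bert_name: str = "bert-base-cased", d_model: int = 512):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bert.eval()                      # used here as a frozen feature extractor
        for p in self.bert.parameters():
            p.requires_grad = False
        self.embed = nn.Embedding(self.bert.config.vocab_size, d_model)
        self.proj = nn.Linear(d_model + self.bert.config.hidden_size, d_model)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            bert_feats = self.bert(
                input_ids=input_ids, attention_mask=attention_mask
            ).last_hidden_state               # (batch, seq_len, bert_hidden)
        own_embeds = self.embed(input_ids)    # (batch, seq_len, d_model)
        fused = torch.cat([own_embeds, bert_feats], dim=-1)
        return self.proj(fused)               # would feed the Transformer encoder stack


if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    batch = tokenizer(["She go to school yesterday ."], return_tensors="pt")
    encoder = BertFusedSourceEncoder()
    features = encoder(batch["input_ids"], batch["attention_mask"])
    print(features.shape)                     # torch.Size([1, seq_len, 512])
```

The fine-tuning step the abstract emphasizes would simply mean loading a BERT checkpoint that has been further trained with the MLM objective on the GEC data, instead of the stock bert-base-cased weights, so that the feature extractor better matches the erroneous input distribution.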

Tasks

Grammatical Error Correction

Benchmark Results

Dataset                | Model                                             | Metric | Claimed | Verified | Status
BEA-2019 (test)        | Transformer + Pre-train with Pseudo Data (+BERT)  | F0.5   | 69.8    | —        | Unverified
CoNLL-2014 Shared Task | Transformer + Pre-train with Pseudo Data (+BERT)  | F0.5   | 65.2    | —        | Unverified
JFLEG                  | Transformer + Pre-train with Pseudo Data + BERT   | GLEU   | 62      | —        | Unverified

Reproductions

None yet — be the first to reproduce this paper.