SOTAVerified

An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction

2019-09-02 · IJCNLP 2019 · Code Available

Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui


Abstract

The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set (F_0.5=65.0) and the official test set of the BEA-2019 shared task (F_0.5=70.2) without making any modifications to the model architecture.
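The F_0.5 score reported above weights precision twice as heavily as recall, the standard choice in grammatical error correction evaluation, where proposing a wrong edit is considered worse than missing one. A minimal sketch of the F_beta computation from edit counts (the function and variable names are illustrative, not from the paper or any specific scorer):

```python
def f_beta(tp, fp, fn, beta=0.5):
    """F_beta from true-positive, false-positive, and false-negative edit counts.

    beta < 1 weights precision more heavily than recall; beta = 0.5 gives
    the F_0.5 score used in the CoNLL-2014 and BEA-2019 GEC benchmarks.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: 50 correct edits, 20 spurious edits, 30 missed errors
score = f_beta(50, 20, 30)
```

With beta = 1 the formula reduces to the ordinary F1 (harmonic mean of precision and recall); shrinking beta shifts the weight toward precision.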


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| BEA-2019 (test) | Transformer + Pre-train with Pseudo Data | F0.5 | 70.2 | — | Unverified |
| CoNLL-2014 Shared Task | Transformer + Pre-train with Pseudo Data | F0.5 | 65.0 | — | Unverified |
