
Adapting Sequence Models for Sentence Correction

2017-07-27 · EMNLP 2017 · Code Available

Allen Schmaltz, Yoon Kim, Alexander M. Rush, Stuart M. Shieber


Abstract

In a controlled experiment of sequence-to-sequence approaches for the task of sentence correction, we find that character-based models are generally more effective than word-based models and models that encode subword information via convolutions, and that modeling the output data as a series of diffs improves effectiveness over standard approaches. Our strongest sequence-to-sequence model improves over our strongest phrase-based statistical machine translation model, with access to the same data, by 6 M2 (0.5 GLEU) points. Additionally, in the data environment of the standard CoNLL-2014 setup, we demonstrate that modeling (and tuning against) diffs yields similar or better M2 scores with simpler models and/or significantly less data than previous sequence-to-sequence approaches.
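To illustrate the idea of modeling the output as a series of diffs, here is a minimal sketch in Python: the corrected target sentence is re-encoded as a token-level diff against the source, so unchanged tokens are copied through and edits are marked with explicit tags. The `<del>`/`<ins>` tag names and the use of `difflib` are assumptions for illustration, not the paper's exact scheme.

```python
import difflib

def encode_as_diff(source_tokens, target_tokens):
    """Encode a corrected target as a diff against the source sentence.

    Unchanged tokens are copied verbatim; deleted source spans are wrapped
    in <del>...</del> and inserted target spans in <ins>...</ins>
    (hypothetical tag names chosen for this sketch).
    """
    out = []
    sm = difflib.SequenceMatcher(a=source_tokens, b=target_tokens)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.extend(source_tokens[i1:i2])
        else:
            # A "replace" opcode contributes both a deletion and an insertion.
            if op in ("delete", "replace"):
                out.append("<del>")
                out.extend(source_tokens[i1:i2])
                out.append("</del>")
            if op in ("insert", "replace"):
                out.append("<ins>")
                out.extend(target_tokens[j1:j2])
                out.append("</ins>")
    return out

src = "He go to school yesterday .".split()
tgt = "He went to school yesterday .".split()
print(" ".join(encode_as_diff(src, tgt)))
# → He <del> go </del> <ins> went </ins> to school yesterday .
```

A sequence-to-sequence model trained on such targets only has to generate the edit spans explicitly, which can make tuning against edit-based metrics such as M2 more direct than generating the full corrected sentence.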
