The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction

2019-06-04 · WS 2019 · Code Available

Dimitrios Alikaniotis, Vipul Raheja

Abstract

Recent work on Grammatical Error Correction (GEC) has highlighted the importance of language modeling, showing that good performance can be achieved by comparing the probabilities of proposed edits. At the same time, advances in language modeling have produced linguistic output that is almost indistinguishable from human-generated text. In this paper, we up the ante by exploring the potential of more sophisticated language models in GEC and offer some key insights into their strengths and weaknesses. We show that, in line with recent results in other NLP tasks, Transformer architectures achieve consistently high performance and provide a competitive baseline for future machine learning models.
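The abstract's central idea, ranking a sentence and its proposed corrections by their probability under a pretrained Transformer language model, can be sketched in a few lines. The snippet below is a minimal illustration using GPT-2 via the Hugging Face transformers library, not the authors' exact pipeline; the example sentences and the `sentence_log_prob` helper are illustrative assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of `sentence` under the language model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean cross-entropy
        # over the predicted tokens.
        loss = model(ids, labels=ids).loss
    # Undo the averaging to recover a total log-probability.
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical candidate pair; a real system would generate candidates
# from confusion sets or an edit model.
original = "I likes turtles."
candidate = "I like turtles."
best = max([original, candidate], key=sentence_log_prob)
print(best)  # keeps whichever version the LM finds more probable
```

Since raw log-probabilities favor shorter token sequences, length-normalizing the score (dividing by the number of predicted tokens) is a common refinement when candidates differ in length.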

Tasks

Grammatical Error Correction
