
A Low-Resource Approach to the Grammatical Error Correction of Ukrainian

2023-05-05 · EACL 2023 · Code Available

Frank Palma Gomez, Alla Rozovskaya, and Dan Roth


Abstract

We present our system that participated in the shared task on grammatical error correction of Ukrainian. We implemented two approaches that make use of large pre-trained language models and synthetic data and that have previously been applied to error correction of English as well as of low-resource languages. The first approach fine-tunes a large multilingual language model (mT5) in two stages: first on synthetic data, and then on gold data. The second approach trains a (smaller) seq2seq Transformer model pre-trained on synthetic data and fine-tuned on gold data. Our mT5-based model scored first in the "GEC only" track and a very close second in the "GEC+Fluency" track. Our two key innovations are (1) fine-tuning in stages, first on synthetic and then on gold data, and (2) a high-quality corruption method based on round-trip machine translation that complements existing noisification approaches.
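The round-trip corruption idea from innovation (2) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a clean Ukrainian sentence is translated into a pivot language and back, and the round trip introduces naturalistic errors, yielding (corrupted, clean) pairs for synthetic pre-training. The `translate` function below is a hypothetical stand-in; a real system would call an actual MT model, and the "noise" it simulates here is purely illustrative.

```python
# Sketch of round-trip machine translation as a corruption method for GEC.
# Clean text is translated uk -> pivot -> uk; the imperfect round trip
# produces a "corrupted" version, and (corrupted, clean) pairs serve as
# synthetic training data.
# NOTE: `translate` is a hypothetical placeholder, not a real MT API.

def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT call; a real system would return a translation."""
    if src == "uk":
        # Forward pass (simulated as identity for illustration).
        return text
    # Backward pass: crudely simulate translation loss by dropping
    # short tokens, standing in for the errors a real round trip adds.
    return " ".join(w for w in text.split() if len(w) > 2)

def roundtrip_corrupt(sentence: str, pivot: str = "en") -> str:
    """Corrupt a clean sentence via a uk -> pivot -> uk round trip."""
    pivot_text = translate(sentence, src="uk", tgt=pivot)
    return translate(pivot_text, src=pivot, tgt="uk")

def make_synthetic_pairs(clean_sentences):
    """Build (source=corrupted, target=clean) pairs for pre-training."""
    return [(roundtrip_corrupt(s), s) for s in clean_sentences]

pairs = make_synthetic_pairs(["Це є приклад речення з помилками ."])
```

In the staged training the paper describes, pairs like these would be used for the first (synthetic) fine-tuning stage, with the gold shared-task data used in the second stage.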
