GM-RKB WikiText Error Correction Task and Baselines

2020-05-01 · LREC 2020 · Code Available

Gabor Melli, Abdelrhman Eldallal, Bassim Lazem, Olga Moreira


Abstract

We introduce the GM-RKB WikiText Error Correction Task for the automatic detection and correction of typographical errors in WikiText-annotated pages. The included corpus is based on a snapshot of the GM-RKB domain-specific semantic wiki, a large collection of concepts, personages, and publications primarily centered on data mining and machine learning research topics. Numerous Wikipedia pages were also included as additional training data in the task's evaluation process. The corpus was then automatically modified to inject realistic synthetic errors, producing a ground-truth comparison for training and evaluation. We designed and evaluated two supervised baseline WikiFixer error correction methods: (1) a naive approach based on a maximum-likelihood character-level language model, and (2) an advanced model based on a sequence-to-sequence (seq2seq) neural network architecture. Both error correction models operated at the character level. When compared against an off-the-shelf word-level spell checker, these methods showed a significant improvement in the task's performance, with the seq2seq-based model correcting more errors than it introduced. Finally, we published our data and code.
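To illustrate the flavor of the naive baseline, the following is a minimal, hypothetical sketch of a maximum-likelihood character-level language model: it counts character n-grams from training text and predicts the most likely next character given a short context. It is an assumption-laden toy, not the paper's actual WikiFixer implementation (the class and method names here are invented for illustration).

```python
from collections import defaultdict


class CharNgramModel:
    """Toy maximum-likelihood character-level language model.

    Hypothetical illustration only; not the paper's WikiFixer baseline.
    """

    def __init__(self, n=3):
        self.n = n  # context length is n - 1 characters
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Pad the start so the first characters also get a context.
        padded = "^" * (self.n - 1) + text
        for i in range(len(text)):
            context = padded[i:i + self.n - 1]
            self.counts[context][padded[i + self.n - 1]] += 1

    def prob(self, context, char):
        # Maximum-likelihood estimate: count(context, char) / count(context).
        ctx = self.counts[context]
        total = sum(ctx.values())
        return ctx[char] / total if total else 0.0

    def most_likely(self, context):
        # The character with the highest count in this context, if any.
        ctx = self.counts[context]
        return max(ctx, key=ctx.get) if ctx else None


model = CharNgramModel(n=3)
model.train("[[machine learning]] " * 50)
# In this training text, "[[" is always followed by "m".
print(model.most_likely("[["))
```

A character-level error corrector in this spirit could flag characters whose estimated probability given their context falls below a threshold, then substitute the model's most likely character; the seq2seq baseline instead learns such corrections end to end.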
