
Towards Shared Datasets for Normalization Research

2014-05-01 · LREC 2014

Orphée De Clercq, Sarah Schulz, Bart Desmet, Véronique Hoste


Abstract

In this paper we present a Dutch and an English dataset that can serve as a gold standard for evaluating text normalization approaches. Combining text messages, message board posts and tweets, these datasets represent a variety of user-generated content. All data were manually normalized to their standard form using newly developed guidelines. We perform automatic lexical normalization experiments on these datasets using statistical machine translation techniques. We work at both the word and the character level and find that we can improve the BLEU score by ca. 20% for both languages. Before this user-generated content can be released publicly to the research community, some issues first need to be resolved. We discuss these in closer detail, focusing on current legislation and examining previous, similar data collection projects. With this discussion we hope to shed some light on the difficulties researchers face when trying to share social media data.
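The abstract evaluates normalization quality with BLEU, comparing system output against the manually normalized gold standard. As a minimal sketch of that evaluation setup (the paper does not specify tooling; the implementation below, the example sentences, and the function names are illustrative assumptions, not the authors' code):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. Real evaluations
    typically use corpus-level BLEU with smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped counts: each candidate n-gram credited at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total if overlap else 1e-9)  # avoid log(0)
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Hypothetical example: a raw tweet vs. its normalized form, scored
# against a manually normalized gold standard.
raw = "c u 2moro at d party".split()
norm = "see you tomorrow at the party".split()
gold = "see you tomorrow at the party".split()

print(round(bleu(raw, gold), 3))   # → 0.0 (raw text diverges from the standard form)
print(round(bleu(norm, gold), 3))  # → 1.0 (perfect match after normalization)
```

The gap between the two scores is the kind of improvement the abstract reports: normalizing the input moves its BLEU score against the gold standard upward.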
