
Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets

TACL 2017 · 2017-09-27

Rotem Dror, Gili Baumer, Marina Bogomolov, Roi Reichart


Abstract

With the ever-growing amounts of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper, we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction.
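
The core idea of replicability analysis is that simply counting the datasets with p < α inflates the chance of a spurious claim; instead, one computes a lower confidence bound on the number of datasets where the effect genuinely holds. Below is a minimal sketch of one standard realization, a Bonferroni-style partial conjunction estimator in the spirit of Benjamini and Heller (2008); the function names and the example p-values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def partial_conjunction_pvalue(pvalues, u):
    """p-value for the nested null 'the effect holds on fewer than u of
    the n datasets' (Bonferroni variant): p^(u/n) = (n - u + 1) * p_(u),
    where p_(u) is the u-th smallest per-dataset p-value, capped at 1."""
    p_sorted = np.sort(np.asarray(pvalues, dtype=float))
    n = p_sorted.size
    return min(1.0, (n - u + 1) * p_sorted[u - 1])

def estimate_num_significant(pvalues, alpha=0.05):
    """Lower confidence bound on the number of datasets with a true
    effect: test H^(1/n), H^(2/n), ... in a fixed sequence, each at
    level alpha, and stop at the first acceptance."""
    k = 0
    for u in range(1, len(pvalues) + 1):
        if partial_conjunction_pvalue(pvalues, u) > alpha:
            break
        k = u
    return k

# Hypothetical p-values from comparing two parsers on five domains.
pvals = [0.001, 0.004, 0.02, 0.03, 0.2]
print(estimate_num_significant(pvals))  # -> 2: with 95% confidence, the
                                        # improvement replicates on at
                                        # least two of the five domains
```

Note the contrast with the naive count: four of the five hypothetical p-values fall below 0.05, but the partial conjunction bound certifies replicability on only two datasets, which is exactly the kind of correction for multiple comparisons the abstract argues for.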
