
Annotating omission in statement pairs

2017-04-01 · WS 2017 · Code Available

Héctor Martínez Alonso, Amaury Delamaire, Benoît Sagot


Abstract

We focus on the identification of omission in statement pairs. We compare three annotation schemes, namely two different crowdsourcing schemes and manual expert annotation. We show that the simpler of the two crowdsourcing approaches yields better annotation quality than the more complex one. We use a dedicated classifier to assess whether the annotators' behavior can be explained by straightforward linguistic features. The classifier benefits from a model that uses lexical information beyond length and overlap measures. However, for our task, we argue that expert annotation, not crowdsourcing, offers the best compromise between annotation cost and quality.
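The abstract mentions that the classifier relies on length and overlap measures, plus lexical information beyond them. As a rough illustration of what such surface features for a statement pair might look like, here is a minimal sketch; the function name, the feature set, and the example pair are assumptions for illustration, not the paper's actual feature extractor.

```python
def pair_features(a: str, b: str) -> dict:
    """Toy length and token-overlap features for a statement pair (a, b).

    This is a hypothetical sketch of the kind of "length and overlap
    measures" the abstract alludes to, not the paper's implementation.
    """
    ta, tb = a.lower().split(), b.lower().split()
    sa, sb = set(ta), set(tb)
    overlap = len(sa & sb)
    union = len(sa | sb)
    return {
        "len_a": len(ta),
        "len_b": len(tb),
        "len_diff": len(ta) - len(tb),                    # positive if a is longer
        "jaccard": overlap / union if union else 0.0,     # symmetric token overlap
        "containment": overlap / len(sb) if sb else 0.0,  # share of b's tokens found in a
    }

# Illustrative pair where the second statement omits a detail from the first.
feats = pair_features(
    "The minister resigned on Tuesday after the vote",
    "The minister resigned after the vote",
)
```

A classifier for omission could consume such features directly; the paper's point is that purely length- and overlap-based features are weaker than a model that also uses lexical information.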
