
Lexical Substitution Dataset for German

2014-05-01 · LREC 2014

Kostadin Cholakov, Chris Biemann, Judith Eckle-Kohler, Iryna Gurevych


Abstract

This article describes a lexical substitution dataset for German. The full dataset contains 2,040 sentences from the German Wikipedia, each with one target word. There are 51 target nouns, 51 adjectives, and 51 verbs, randomly selected from 3 frequency groups based on the lemma frequency list of the German WaCky corpus. 200 sentences have been annotated by 4 professional annotators; the remaining sentences were annotated by 1 professional annotator and 5 additional annotators recruited via crowdsourcing. The resulting dataset can be used to evaluate not only lexical substitution systems, but also different sense inventories and word sense disambiguation systems.
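To make the evaluation use-case concrete, the sketch below models one annotated item (a sentence, its target word, and annotator-provided substitutes) and scores a system's guess with a SemEval-2007-style "best" metric commonly used for lexical substitution. The data structure and field names are illustrative assumptions, not the dataset's actual release format.

```python
from dataclasses import dataclass, field

@dataclass
class LexSubItem:
    """One annotated instance (hypothetical schema, for illustration only)."""
    sentence: str                                      # Wikipedia sentence containing the target
    target: str                                        # target lemma
    pos: str                                           # "noun", "adjective", or "verb"
    substitutes: dict = field(default_factory=dict)    # substitute lemma -> annotator count

def best_score(item: LexSubItem, guess: str) -> float:
    """Credit for a single guess: how often annotators proposed it,
    divided by the total number of gold substitute tokens."""
    total = sum(item.substitutes.values())
    if total == 0:
        return 0.0
    return item.substitutes.get(guess, 0) / total

# Toy example: 6 gold substitute tokens in total, 3 of them "klar".
item = LexSubItem(
    sentence="Der Vortrag war hell und klar.",
    target="hell",
    pos="adjective",
    substitutes={"klar": 3, "deutlich": 2, "licht": 1},
)
print(best_score(item, "klar"))  # 3 of 6 gold tokens -> 0.5
```

A full evaluation would average this score over all 2,040 instances; because substitutes are aggregated over multiple annotators, the per-item counts naturally weight substitutes by inter-annotator agreement.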
