
DBpedia Abstracts: A Large-Scale, Open, Multilingual NLP Training Corpus

2016-05-01 · LREC 2016

Martin Brümmer, Milan Dojchinovski, Sebastian Hellmann


Abstract

The ever-increasing importance of machine learning in Natural Language Processing is accompanied by an equally increasing need for large-scale training and evaluation corpora. Due to its size, openness, and relative quality, Wikipedia has already served as a source of such data, but only on a limited scale. This paper introduces the DBpedia Abstract Corpus, a large-scale, open corpus of annotated Wikipedia texts in six languages, featuring over 11 million texts and over 97 million entity links. The paper describes the properties of the Wikipedia texts, the corpus creation process, the corpus format, and interesting use cases, such as Named Entity Linking training and evaluation.
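To illustrate the kind of data the abstract describes, the sketch below models a single corpus record as a text with entity-link annotations (surface form offsets plus a linked DBpedia resource). This is a hypothetical, simplified representation for illustration only; the actual corpus format, field names, and structure are defined by the paper, not here.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EntityLink:
    # Hypothetical annotation: character offsets of a mention in the
    # abstract text, plus the DBpedia resource it links to.
    begin: int   # start offset (inclusive)
    end: int     # end offset (exclusive)
    uri: str     # linked DBpedia resource URI

@dataclass
class AbstractRecord:
    # Hypothetical record: one Wikipedia abstract with its entity links.
    text: str
    links: List[EntityLink] = field(default_factory=list)

def linked_mentions(record: AbstractRecord) -> List[Tuple[str, str]]:
    """Return (surface form, DBpedia URI) pairs for all entity links."""
    return [(record.text[l.begin:l.end], l.uri) for l in record.links]

# Example record (invented text, for illustration only).
record = AbstractRecord(
    text="Leipzig is a city in Saxony.",
    links=[
        EntityLink(0, 7, "http://dbpedia.org/resource/Leipzig"),
        EntityLink(21, 27, "http://dbpedia.org/resource/Saxony"),
    ],
)
```

A record like this could serve as one training or evaluation instance for a Named Entity Linking system: the offsets give gold mention boundaries and the URIs give gold disambiguation targets.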
