
Unsupervised Induction of Linguistic Categories with Records of Reading, Speaking, and Writing

2018-06-01 · NAACL 2018

Maria Barrett, Ana Valeria González-Garduño, Lea Frermann, Anders Søgaard


Abstract

When learning POS taggers and syntactic chunkers for low-resource languages, different resources may be available, and often all we have is a small tag dictionary, motivating type-constrained unsupervised induction. Even small dictionaries can improve the performance of unsupervised induction algorithms. This paper shows that performance can be further improved by including data that is readily available or can be easily obtained for most languages, i.e., eye-tracking, speech, or keystroke logs (or any combination thereof). We project information from all these data sources into shared spaces, in which the union of words is represented. For English unsupervised POS induction, the additional information, which is not required at test time, leads to an average error reduction on Ontonotes domains of 1.5% over systems augmented with state-of-the-art word embeddings. On Penn Treebank the best model achieves 5.4% error reduction over a word embeddings baseline. We also achieve significant improvements for syntactic chunk induction. Our analysis shows that improvements are even bigger when the available tag dictionaries are smaller.
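The core idea in the abstract — projecting several behavioural data sources (eye-tracking, speech, keystroke logs) into a shared space that represents the union of words — can be illustrated with a minimal sketch. The feature names, values, and the zero-filling strategy below are illustrative assumptions for exposition, not the paper's actual projection method:

```python
# Hypothetical sketch: build one shared feature space over the union of
# words covered by several behavioural data sources. Words missing from a
# source get zero-filled dimensions for that source. This is an assumed,
# simplified stand-in for the paper's projection into shared spaces.

def shared_space(sources):
    """Map each word in the union of all sources' vocabularies to a single
    concatenated feature vector, zero-filling a source's dimensions when it
    does not cover the word."""
    # Dimensionality of each source, read off any of its vectors.
    dims = [len(next(iter(s.values()))) for s in sources]
    vocab = sorted(set().union(*sources))  # union of words across sources
    space = {}
    for w in vocab:
        vec = []
        for s, d in zip(sources, dims):
            vec.extend(s.get(w, [0.0] * d))
        space[w] = vec
    return space

# Toy per-word features (values are made up): gaze statistics from
# eye-tracking logs, and pause statistics from keystroke logs.
gaze = {"the": [0.1, 0.9], "dog": [0.7, 0.3]}
keys = {"dog": [0.2, 0.5, 0.4], "barks": [0.6, 0.1, 0.8]}

space = shared_space([gaze, keys])
print(space["dog"])  # covered by both sources: [0.7, 0.3, 0.2, 0.5, 0.4]
```

Note that the resulting vectors exist for every word in either source, so an induction model can consume them uniformly; per the abstract, this auxiliary information is only needed during training, not at test time.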
