
Continuous multilinguality with language vectors

2017-04-01 · EACL 2017

Robert Östling, Jörg Tiedemann


Abstract

Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.
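The core idea of the abstract — conditioning a shared character-level language model on a learned per-language vector rather than a discrete language ID — can be sketched as follows. This is a minimal illustration, not the authors' implementation; all dimensions, parameter names, and the simple RNN cell are assumptions chosen for clarity.

```python
# Sketch: a character-level RNN language model conditioned on a learned
# continuous language vector, concatenated to each character embedding.
# Sizes and initialisation are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_langs, lang_dim = 990, 64    # one vector per language (990 languages in the paper's data)
n_chars, char_dim = 100, 128   # hypothetical character vocabulary size
hidden = 256

# Learned lookup tables (randomly initialised here; trained jointly in practice).
lang_vecs = rng.normal(size=(n_langs, lang_dim))
char_vecs = rng.normal(size=(n_chars, char_dim))

# Simple RNN parameters; the input is [char embedding ; language vector].
W_in = rng.normal(size=(char_dim + lang_dim, hidden)) * 0.01
W_h = rng.normal(size=(hidden, hidden)) * 0.01
W_out = rng.normal(size=(hidden, n_chars)) * 0.01

def step(char_id, lang_id, h):
    """One RNN step: the same language vector is appended to every input,
    so one set of weights serves all languages."""
    x = np.concatenate([char_vecs[char_id], lang_vecs[lang_id]])
    h = np.tanh(x @ W_in + h @ W_h)
    logits = h @ W_out          # unnormalised next-character distribution
    return h, logits

h = np.zeros(hidden)
for c in [5, 17, 3]:            # a toy character sequence
    h, logits = step(c, lang_id=42, h=h)

print(logits.shape)             # (100,): scores over the character vocabulary
```

Because the language is represented as a vector rather than a one-hot category, an unseen language variety can in principle be placed in the same continuous space (e.g. by optimising only its language vector), which is the kind of inference about unseen varieties the abstract describes.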
