
Not All Neural Embeddings are Born Equal

2014-10-02

Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio


Abstract

Neural language models learn word representations that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models. We show that translation-based embeddings outperform those learned by cutting-edge monolingual models at single-language tasks requiring knowledge of conceptual similarity and/or syntactic role. The findings suggest that, while monolingual models learn information about how concepts are related, neural-translation models better capture their true ontological status.
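The conceptual-similarity evaluations referenced in the abstract typically score a pair of word embeddings by cosine similarity and compare the resulting rankings against human judgments. A minimal sketch of that scoring step, with hypothetical toy vectors (not the authors' data or code):

```python
# Sketch: ranking word pairs by cosine similarity of their embeddings.
# The 3-d vectors below are invented for illustration; real evaluations
# use learned embeddings and correlate scores with human ratings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for three words.
embeddings = {
    "teacher": [0.9, 0.1, 0.2],
    "professor": [0.8, 0.2, 0.3],
    "banana": [0.1, 0.9, 0.7],
}

# An embedding space that captures conceptual similarity should score
# the related pair higher than the unrelated one.
sim_related = cosine(embeddings["teacher"], embeddings["professor"])
sim_unrelated = cosine(embeddings["teacher"], embeddings["banana"])
```

With these toy vectors, `sim_related` comes out well above `sim_unrelated`, which is the ordering a similarity benchmark rewards.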
