SOTAVerified

Multilingual Models for Compositional Distributed Semantics

2014-04-17 · ACL 2014 · Code Available

Karl Moritz Hermann, Phil Blunsom


Abstract

We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.
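
The abstract describes the core training signal: sentence vectors are composed from word vectors, parallel sentence pairs are pulled together in a shared embedding space, and randomly sampled non-parallel sentences are pushed at least a margin further away. The sketch below illustrates that idea with simple additive composition and a hinge loss in PyTorch; the class, parameter names, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an additive bilingual sentence
# embedding model trained with a margin-based noise-contrastive loss,
# roughly in the spirit of the paper. Names and sizes are assumptions.
import torch
import torch.nn as nn

class AdditiveBiModel(nn.Module):
    def __init__(self, vocab_src, vocab_tgt, dim=128):
        super().__init__()
        self.src = nn.Embedding(vocab_src, dim)  # source-language word vectors
        self.tgt = nn.Embedding(vocab_tgt, dim)  # target-language word vectors

    def compose(self, emb, ids):
        # Additive composition: a sentence vector is the sum of its word vectors.
        return emb(ids).sum(dim=0)

    def loss(self, src_ids, tgt_ids, noise_ids, margin=1.0):
        a = self.compose(self.src, src_ids)    # source sentence
        b = self.compose(self.tgt, tgt_ids)    # its translation
        n = self.compose(self.tgt, noise_ids)  # a random non-parallel target sentence
        # Pull parallel sentences together, push the noise sentence at least
        # `margin` further away (hinge on squared Euclidean distance).
        pos = (a - b).pow(2).sum()
        neg = (a - n).pow(2).sum()
        return torch.clamp(margin + pos - neg, min=0.0)

# Toy usage with made-up token ids.
model = AdditiveBiModel(vocab_src=1000, vocab_tgt=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = model.loss(torch.tensor([3, 17, 42]),
                  torch.tensor([5, 9]),
                  torch.tensor([7, 88, 2, 14]))
loss.backward()
opt.step()
```

Note that the objective needs only sentence-aligned parallel text: no word alignments or syntax are used, which is what lets the same recipe apply across diverse language pairs.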

Tasks

Cross-Lingual Document Classification

Benchmark Results

Dataset                              | Model | Metric   | Claimed | Verified | Status
Reuters RCV1/RCV2 English-to-German  | Bi+   | Accuracy | 88.1    |          | Unverified
Reuters RCV1/RCV2 German-to-English  | Bi+   | Accuracy | 79.2    |          | Unverified
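
The accuracies above come from cross-lingual document classification: a classifier is trained on document embeddings in one language and evaluated on embeddings of documents in the other language, which only works if the two languages share a semantic space. A rough sketch of that protocol follows; it assumes a trained bilingual embedder `embed(doc, lang)` and labelled RCV1/RCV2 data, and uses logistic regression as a stand-in for the paper's classifier.

```python
# Sketch of the English-to-German evaluation setup (assumed helpers, not the
# paper's pipeline): fit on English document vectors, test on German ones.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def cldc_accuracy(embed, train_docs, train_labels, test_docs, test_labels):
    X_train = np.stack([embed(d, lang="en") for d in train_docs])  # train side
    X_test = np.stack([embed(d, lang="de") for d in test_docs])    # test side
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(test_labels, clf.predict(X_test))
```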

Reproductions

No reproductions yet.