Lexical Chains meet Word Embeddings in Document-level Statistical Machine Translation

2017-09-01 · WS 2017

Laura Mascarell


Abstract

Currently under review for EMNLP 2017. The phrase-based Statistical Machine Translation (SMT) approach deals with sentences in isolation, making it difficult to consider discourse context in translation. This poses a challenge for ambiguous words that need discourse knowledge to be correctly translated. We propose a method that benefits from the semantic similarity in lexical chains to improve SMT output by integrating them into a document-level decoder. We use word embeddings to build the lexical chains, in contrast to the traditional approach based on lexical resources. Experimental results on German-to-English show that our method produces correct translations in up to 88% of the changes, improving the translation over the baseline in 36%-48% of them.
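As a rough illustration of the embedding-based chaining idea, the sketch below greedily groups words into lexical chains by cosine similarity of their embeddings. This is not the paper's actual algorithm: the toy embeddings, the similarity threshold, and the greedy grouping rule are all assumptions made for the example (a real system would use trained embeddings such as word2vec).

```python
import numpy as np

# Toy 3-dimensional word embeddings (illustrative only).
embeddings = {
    "bank":    np.array([0.9, 0.1, 0.0]),
    "money":   np.array([0.8, 0.2, 0.1]),
    "deposit": np.array([0.7, 0.3, 0.0]),
    "river":   np.array([0.0, 0.9, 0.4]),
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_chains(words, threshold=0.9):
    """Greedily build lexical chains: each word joins the first chain
    whose most recent member is similar enough, else starts a new chain.
    The threshold value is an assumption for this toy example."""
    chains = []
    for w in words:
        for chain in chains:
            if cosine(embeddings[w], embeddings[chain[-1]]) >= threshold:
                chain.append(w)
                break
        else:
            chains.append([w])
    return chains

print(build_chains(["bank", "money", "river", "deposit"]))
# → [['bank', 'money', 'deposit'], ['river']]
```

In a document-level decoder, such chains could then inform the translation of each chain member so that semantically related words receive consistent translations across sentences.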
