
Multimodal Pivots for Image Caption Translation

2016-01-15 · ACL 2016

Julian Hitschler, Shigehiko Schamoni, Stefan Riezler


Abstract

We present an approach that improves statistical machine translation of image descriptions using multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images captioned in the target language, and to use the captions of the most similar images for cross-lingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data; it relies only on large available datasets of monolingually captioned images and on state-of-the-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.
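The retrieval-and-rerank pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual system: it assumes image feature vectors (e.g. from a CNN) are already precomputed, uses plain cosine similarity for retrieval, and substitutes a simple word-overlap score for the paper's reranking model. All function names and the `alpha` interpolation weight are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two precomputed image feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_captions(query_vec, database, k=2):
    # database: list of (feature_vector, target_language_caption) pairs.
    # Returns the captions of the k images most similar to the query image.
    ranked = sorted(database, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [caption for _, caption in ranked[:k]]

def rerank(hypotheses, pivot_captions, alpha=0.5):
    # hypotheses: list of (translation_string, mt_model_score).
    # Interpolates the MT model score with a crude word-overlap score
    # against the retrieved pivot captions (illustrative stand-in for
    # the paper's reranking model).
    def overlap(hyp):
        words = set(hyp.lower().split())
        return max(len(words & set(c.lower().split())) / max(len(words), 1)
                   for c in pivot_captions)
    return max(hypotheses,
               key=lambda h: alpha * h[1] + (1 - alpha) * overlap(h[0]))
```

Used end to end, one would embed the source image, retrieve target-language captions of its nearest neighbors, and pick the MT hypothesis that best balances model score and agreement with those captions.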
