
Probing the Need for Visual Context in Multimodal Machine Translation

2019-03-20 · NAACL 2019

Ozan Caglayan, Pranava Madhyastha, Lucia Specia, Loïc Barrault


Abstract

Current work on multimodal machine translation (MMT) has suggested that the visual modality is either unnecessary or only marginally beneficial. We posit that this is a consequence of the very simple, short, and repetitive sentences used in the only available dataset for the task (Multi30K), rendering the source text sufficient as context. In the general case, however, we believe that it is possible to combine visual and textual information to ground translations. In this paper we probe the contribution of the visual modality to state-of-the-art MMT models by conducting a systematic analysis in which we partially deprive the models of source-side textual context. Our results show that under limited textual context, models are capable of leveraging the visual input to generate better translations. This contradicts the current belief that MMT models disregard the visual modality because of either the quality of the image features or the way they are integrated into the model.
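The abstract does not spell out how textual context is withheld. As one illustrative sketch only — the function name, placeholder token, and masking scheme below are assumptions for exposition, not the paper's actual implementation — partially depriving a model of source context could take the form of keeping a prefix of the source sentence and masking the remainder:

```python
def progressively_mask(tokens, k, placeholder="[v]"):
    """Keep the first k source tokens and replace the rest with a
    placeholder token, simulating partial loss of textual context.
    (Hypothetical helper; not from the paper.)"""
    return tokens[:k] + [placeholder] * max(0, len(tokens) - k)

# Example: a Multi30K-style caption with only 3 context tokens retained.
src = "a man in a red shirt plays the guitar".split()
masked = progressively_mask(src, 3)
print(" ".join(masked))  # "a man in [v] [v] [v] [v] [v] [v]"
```

Under such a degradation, any remaining translation quality for the masked positions would have to come from the visual modality, which is the intuition behind the probing analysis.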
