SOTAVerified

ViTA: Visual-Linguistic Translation by Aligning Object Tags

2021-06-01 · Workshop on Asian Translation 2021

Kshitij Gupta, Devansh Gautam, Radhika Mamidi


Abstract

Multimodal Machine Translation (MMT) enriches the source text with visual information for translation. It has gained popularity in recent years, and several pipelines have been proposed for it. Yet, the task lacks quality datasets that illustrate the contribution of the visual modality to translation systems. In this paper, we propose our system under the team name Volta for the Multimodal Translation Task of WAT 2021 from English to Hindi. We also participate in the text-only subtask of the same language pair, for which we use mBART, a pretrained multilingual sequence-to-sequence model. For multimodal translation, we propose to enhance the textual input by bringing the visual information into the textual domain, extracting object tags from the image. We also explore the robustness of our system by systematically degrading the source text. Finally, we achieve BLEU scores of 44.6 on the test set and 51.6 on the challenge set of the multimodal task.
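The core idea of bringing visual information into the textual domain can be sketched as a simple preprocessing step: object tags detected in the paired image are appended to the English source sentence before it is fed to a text-only translation model such as mBART. The separator token, tag ordering, and function name below are illustrative assumptions, not the authors' exact format:

```python
def augment_with_tags(source: str, object_tags: list[str], sep: str = " ## ") -> str:
    """Append detected object tags to the source sentence so that a
    text-only translation model (e.g. mBART) can exploit visual context.

    The tags would come from an off-the-shelf object detector run on the
    image paired with the sentence. The "##" separator is an assumption
    for illustration, not necessarily the paper's delimiter.
    """
    if not object_tags:
        # No visual context available; fall back to plain text translation.
        return source
    return source + sep + " ".join(object_tags)


# Hypothetical example: detector output for an image of a rider in a field.
print(augment_with_tags("a man riding a horse", ["man", "horse", "field"]))
# → a man riding a horse ## man horse field
```

The augmented string is then translated exactly like ordinary text, which is what lets the approach reuse a pretrained sequence-to-sequence model without any architectural changes for the image modality.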

Benchmark Results

Dataset                             | Model | Metric       | Claimed | Verified | Status
Hindi Visual Genome (Challenge Set) | ViTA  | BLEU (EN-HI) | 51.6    | —        | Unverified
Hindi Visual Genome (Test Set)      | ViTA  | BLEU (EN-HI) | 44.6    | —        | Unverified
