Semantic-Preserving Augmentation for Robust Image-Text Retrieval

2023-03-10

Sunwoo Kim, Kyuhong Shim, Luong Trung Nguyen, Byonghyo Shim

Abstract

Image-text retrieval is the task of searching for the proper textual description of a visual scene and vice versa. One challenge of this task is vulnerability to corruptions of the input image and text. Such corruptions are often unobserved during training and substantially degrade the retrieval model's decision quality. In this paper, we propose a novel image-text retrieval technique, referred to as robust visual semantic embedding (RVSE), which consists of novel image-based and text-based augmentation techniques called semantic-preserving augmentation for image (SPAugI) and text (SPAugT). Since SPAugI and SPAugT change the original data in a way that preserves its semantic information, we enforce the feature extractors to generate semantic-aware embedding vectors regardless of the corruption, significantly improving the model's robustness. Extensive experiments on benchmark datasets show that RVSE outperforms conventional retrieval schemes in image-text retrieval performance.
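The abstract's core idea is that an augmentation which perturbs the input while preserving its semantics lets the model learn embeddings that are stable under corruption. The paper's actual SPAugT procedure is not detailed here, so the following is only an illustrative sketch under assumed details: it drops a small fraction of words from a caption (a hypothetical stand-in for a semantic-preserving text perturbation) and checks that a simple bag-of-words embedding of the augmented caption stays close to the original, which is the kind of consistency RVSE would enforce on its learned feature extractors.

```python
import random

def spaug_text(caption, drop_prob=0.3, seed=0):
    """Hypothetical semantic-preserving text augmentation (not the
    paper's SPAugT): randomly drop words with probability drop_prob,
    leaving very short captions untouched."""
    rng = random.Random(seed)
    words = caption.split()
    if len(words) <= 3:
        return caption
    kept = [w for w in words if rng.random() > drop_prob]
    return " ".join(kept)

def bow_embedding(text, vocab):
    """Toy bag-of-words embedding over a fixed vocabulary."""
    words = text.split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

caption = "a brown dog runs across the green grass"
augmented = spaug_text(caption)
vocab = sorted(set(caption.split()))
sim = cosine(bow_embedding(caption, vocab), bow_embedding(augmented, vocab))
# The augmented caption should remain highly similar to the original,
# reflecting that the perturbation preserved most of the semantics.
```

In the actual method, this consistency would be enforced on the learned image and text encoders (and SPAugI would play the analogous role on the image side), rather than on a bag-of-words proxy.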
