UniVSE: Robust Visual Semantic Embeddings via Structured Semantic Representations
Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, Wei-Ying Ma
Code: github.com/salanueva/UniVSEpytorch
Abstract
We propose Unified Visual-Semantic Embeddings (UniVSE) for learning a joint space of visual and textual concepts. The space unifies concepts at multiple levels: objects, attributes, relations, and full scenes. We propose a contrastive learning approach that achieves fine-grained alignment using only image-caption pairs. Moreover, we present an effective approach for enforcing coverage of the semantic components that appear in a sentence. We demonstrate the robustness of UniVSE in defending against text-domain adversarial attacks on cross-modal retrieval tasks. This robustness also enables the use of visual cues to resolve word dependencies in novel sentences.
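As a rough illustration of the kind of contrastive objective used for aligning visual and textual embeddings from image-caption pairs, the sketch below implements a standard VSE-style hinge loss with in-batch negatives. This is a generic formulation, not the paper's exact objective; the function name, margin value, and use of NumPy are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based triplet loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays, assumed L2-normalized, where
    row i of img_emb matches row i of txt_emb. Every non-matching pair
    in the batch serves as a negative (illustrative setup, not the
    paper's exact sampling scheme).
    """
    sim = img_emb @ txt_emb.T            # (batch, batch) cosine similarities
    pos = np.diag(sim)                   # matched pairs lie on the diagonal
    # Caption-retrieval term: penalize captions scoring within `margin`
    # of an image's own caption.
    cost_cap = np.maximum(0.0, margin + sim - pos[:, None])
    # Image-retrieval term: the symmetric penalty for images.
    cost_img = np.maximum(0.0, margin + sim - pos[None, :])
    mask = 1.0 - np.eye(len(pos))        # exclude the positive pair itself
    return ((cost_cap + cost_img) * mask).sum() / len(pos)
```

With perfectly aligned, well-separated embeddings the loss is zero; shuffling the captions against the images produces a positive loss, which is the gradient signal that pulls matched pairs together.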