
Semi-supervised multimodal coreference resolution in image narrations

2023-10-20 · Code Available

Arushi Goel, Basura Fernando, Frank Keller, Hakan Bilen


Abstract

In this paper, we study multimodal coreference resolution, specifically the setting where a longer descriptive text, i.e., a narration, is paired with an image. This poses significant challenges due to fine-grained image-text alignment, the inherent ambiguity of narrative language, and the unavailability of large annotated training sets. To tackle these challenges, we present a data-efficient semi-supervised approach that utilizes image-narration pairs to resolve coreferences and perform narrative grounding in a multimodal context. Our approach incorporates losses for both labeled and unlabeled data within a cross-modal framework. Our evaluation shows that the proposed approach outperforms strong baselines both quantitatively and qualitatively on the tasks of coreference resolution and narrative grounding.
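The abstract describes an objective that combines loss terms over labeled and unlabeled image-narration pairs. A minimal sketch of such a combined semi-supervised objective (the function name and the unlabeled-loss weight `lambda_u` are illustrative assumptions, not details from the paper):

```python
def semi_supervised_loss(sup_losses, unsup_losses, lambda_u=0.5):
    """Combine supervised and unsupervised loss terms into one objective.

    sup_losses:   per-example losses from labeled image-narration pairs
    unsup_losses: per-example losses from unlabeled pairs
    lambda_u:     weight on the unlabeled term (hypothetical hyperparameter)
    """
    # Average each term over its batch, guarding against empty batches
    sup = sum(sup_losses) / max(len(sup_losses), 1)
    unsup = sum(unsup_losses) / max(len(unsup_losses), 1)
    return sup + lambda_u * unsup

# Example: two labeled-batch losses combined with one unlabeled-batch loss
total = semi_supervised_loss([0.8, 0.6], [1.0], lambda_u=0.5)
```

In practice the unlabeled term would come from a pseudo-labeling or consistency objective; the sketch only shows the weighted combination the abstract alludes to.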
