SOTAVerified

Comprehensive Multi-Modal Interactions for Referring Image Segmentation

2021-04-21 · Findings (ACL) 2022 · Code Available

Kanishk Jain, Vineet Gandhi

Abstract

We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.
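The abstract's central idea is that intra-visual, intra-linguistic, and cross-modal interactions should be computed simultaneously rather than sequentially. The sketch below illustrates one way such synchronous fusion can be realized: visual and word tokens are concatenated and a single self-attention step is run over the joint sequence, so all three kinds of pairwise interactions happen at once. This is an illustrative numpy toy, not the paper's SFM implementation; the function name and dimensions are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def synchronous_fusion(visual, linguistic):
    """Toy sketch of synchronous multi-modal fusion (hypothetical, not the
    paper's SFM): self-attention over the concatenation of visual and word
    tokens computes intra-visual, intra-linguistic, and cross-modal
    interactions in one step instead of sequentially."""
    n_v = visual.shape[0]
    tokens = np.concatenate([visual, linguistic], axis=0)  # (Nv+Nl, d)
    d = tokens.shape[-1]
    attn = softmax(tokens @ tokens.T / np.sqrt(d))          # all pairwise interactions
    fused = attn @ tokens
    return fused[:n_v], fused[n_v:]                         # split back per modality

# toy example: 4 visual tokens and 3 word tokens with 8-dim features
rng = np.random.default_rng(0)
v = rng.standard_normal((4, 8))
w = rng.standard_normal((3, 8))
fused_v, fused_w = synchronous_fusion(v, w)
print(fused_v.shape, fused_w.shape)  # (4, 8) (3, 8)
```

Because every token attends to every other token in a single attention map, no modality's representation is fixed before the cross-modal exchange, which is the failure mode (error propagation) the abstract attributes to sequential fusion schemes.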

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| RefCOCOg val | SHNet | Overall IoU | 49.9 | — | Unverified |
| RefCOCO testA | SHNet | Overall IoU | 58.46 | — | Unverified |
| RefCOCO+ testB | SHNet | Overall IoU | 44.12 | — | Unverified |
| RefCOCO val | SHNet | Overall IoU | 52.75 | — | Unverified |
| RefCOCO val | SHNet | Overall IoU | 65.32 | — | Unverified |
| ReferIt | SHNet | Overall IoU | 69.19 | — | Unverified |

Reproductions