
HARIS: Human-Like Attention for Reference Image Segmentation

2024-05-17

Mengxi Zhang, Heqing Lian, Yiming Liu, Jie Chen


Abstract

Referring image segmentation (RIS) aims to locate the particular region corresponding to a language expression. Existing methods fuse features from different modalities in a bottom-up manner; this design can admit irrelevant image-text pairs, leading to inaccurate segmentation masks. In this paper, we propose HARIS, a referring image segmentation method that introduces a Human-Like Attention mechanism and adopts a parameter-efficient fine-tuning (PEFT) framework. Specifically, Human-Like Attention uses a feedback signal from the multi-modal features, which makes the network focus on the referred objects and discard irrelevant image-text pairs. In addition, the PEFT framework preserves the zero-shot ability of the pre-trained encoders. Extensive experiments on three widely used RIS benchmarks and the PhraseCut dataset demonstrate that our method achieves state-of-the-art performance and strong zero-shot ability.
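The feedback idea in the abstract can be sketched in miniature: cross-attention fuses image and text features, and a gate derived from the fused features down-weights image regions with no support in the expression. This is an illustrative sketch only, with assumed shapes and an assumed sigmoid gate; it is not the authors' HARIS architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feedback_gated_attention(img_feats, txt_feats, n_iters=2):
    """Toy feedback-gated cross-attention (assumed design, not HARIS itself).

    img_feats: (P, D) per-pixel image features
    txt_feats: (T, D) per-token text features
    Returns a per-pixel relevance gate in [0, 1].
    """
    for _ in range(n_iters):
        scores = img_feats @ txt_feats.T        # (P, T) image-text affinities
        attn = softmax(scores, axis=-1)         # attend image regions to tokens
        fused = attn @ txt_feats                # (P, D) multi-modal features
        # Feedback signal: agreement between fused and image features,
        # squashed to [0, 1] and used to suppress irrelevant regions.
        gate = 1.0 / (1.0 + np.exp(-(fused * img_feats).sum(axis=-1)))
        img_feats = gate[:, None] * img_feats   # down-weight mismatched pixels
    return gate

rng = np.random.default_rng(0)
gate = feedback_gated_attention(rng.standard_normal((16, 8)),
                                rng.standard_normal((4, 8)))
print(gate.shape)  # one relevance score per image region
```

Regions whose fused multi-modal features disagree with their visual features receive a gate near zero, so later attention rounds ignore them; this mirrors the paper's stated goal of discarding irrelevant image-text pairs.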
