
Domain Separation Graph Neural Networks for Saliency Object Ranking

CVPR 2024 · 2024-01-01 · Code Available

Zijian Wu, Jun Lu, Jing Han, Lianfa Bai, Yi Zhang, Zhuang Zhao, Siyang Song


Abstract

Saliency object ranking (SOR) has attracted significant attention recently. Previous methods have usually failed to explicitly explore the saliency degree-related relationships between objects. In this paper, we propose a novel Domain Separation Graph Neural Network (DSGNN), which starts by separately extracting shape and texture cues from each object and building a shape graph as well as a texture graph over all objects in the given image. We then propose a Shape-Texture Graph Domain Separation (STGDS) module to separate the task-relevant and task-irrelevant information of target objects by explicitly modelling the relationship between each pair of objects in terms of their shapes and textures, respectively. Furthermore, a Cross-Image Graph Domain Separation (CIGDS) module is introduced to explore a saliency degree subspace that is robust to different scenes, aiming to create a unified representation for targets with the same saliency level in different images. Importantly, our DSGNN automatically learns a multi-dimensional feature to represent each graph edge, allowing complex, diverse, and ranking-related relationships to be modelled. Experimental results show that our DSGNN achieves new state-of-the-art performance on both the ASSR and IRSR datasets, with large SA-SOR improvements of 5.2% and 4.1%, respectively. Our code is available at https://github.com/Wu-ZJ/DSGNN.
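The abstract's key architectural idea is that each graph edge carries a learned multi-dimensional feature rather than a single scalar weight, so different feature channels can encode different object-to-object relations. The sketch below is NOT the authors' implementation (their code is at the repository above); it is a minimal NumPy illustration, with made-up dimensions and a linear map standing in for a learned edge MLP, of one message-passing round where vector-valued edges gate neighbour messages elementwise:

```python
import numpy as np

def edge_features(nodes, W_e):
    # nodes: (n, d) per-object features; W_e: (2d, d) stand-in for a learned
    # edge MLP. For every ordered pair (i, j), derive a d-dim edge vector
    # from the concatenated endpoint features.
    n, d = nodes.shape
    pairs = np.concatenate(
        [np.repeat(nodes, n, axis=0), np.tile(nodes, (n, 1))], axis=1
    )                                            # (n*n, 2d)
    return np.tanh(pairs @ W_e).reshape(n, n, d)

def message_passing(nodes, W_e):
    # One round: each neighbour's features are gated elementwise by the
    # multi-dimensional edge vector, then mean-aggregated, then added back
    # as a residual update. With scalar edge weights, all feature channels
    # would share one relation strength; vector edges lift that limit.
    e = edge_features(nodes, W_e)                # (n, n, d)
    msgs = (e * nodes[None, :, :]).mean(axis=1)  # (n, d)
    return nodes + msgs

rng = np.random.default_rng(0)
n, d = 5, 4                                      # 5 objects, 4-dim features (illustrative)
nodes = rng.standard_normal((n, d))
W_e = rng.standard_normal((2 * d, d)) * 0.1
out = message_passing(nodes, W_e)
print(out.shape)                                 # (5, 4)
```

In the paper's setting, two such graphs (shape and texture) would be built per image and processed by the STGDS/CIGDS modules; this sketch only shows the vector-edge mechanism common to both.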
