DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation

2025-04-07 · CVPR 2025 · Code Available

Bo-Wen Yin, Jiao-Long Cao, Ming-Ming Cheng, Qibin Hou


Abstract

Recent advances in scene understanding benefit greatly from depth maps because of their 3D geometry information, especially under complex conditions (e.g., low light and overexposure). Existing approaches encode depth maps alongside RGB images and fuse the two feature streams to enable more robust predictions. Considering that depth can be regarded as a geometric supplement to RGB images, a straightforward question arises: do we really need to explicitly encode depth information with neural networks, as is done for RGB images? Motivated by this question, we investigate a new way to learn RGBD feature representations and present DFormerv2, a strong RGBD encoder that uses depth maps directly as geometry priors rather than encoding depth information with neural networks. Our goal is to extract geometry cues from the depth values and the spatial distances among all image patch tokens, which then serve as geometry priors to allocate attention weights in self-attention. Extensive experiments demonstrate that DFormerv2 achieves exceptional performance on various RGBD semantic segmentation benchmarks. Code is available at: https://github.com/VCIP-RGBD/DFormer.
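The abstract describes biasing self-attention weights with a geometry prior built from depth and spatial distances among patch tokens, instead of running the depth map through its own encoder. The paper's exact formulation is not reproduced here; the following is a minimal single-head sketch in which the function name, the additive bias form, and the weighting `lam` are all illustrative assumptions, not the authors' implementation (see the repository linked above for that).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def geometry_self_attention(tokens, depth, coords, lam=1.0):
    """Illustrative sketch: self-attention whose logits are biased by a
    geometry prior derived from depth gaps and image-plane distances.

    tokens: (N, C) patch features; depth: (N,) per-patch depth;
    coords: (N, 2) patch-center positions. Identity Q/K/V projections
    are used for brevity (an assumption, not the paper's design).
    """
    N, C = tokens.shape
    # Content-based attention logits.
    logits = tokens @ tokens.T / np.sqrt(C)
    # Geometry prior: penalize token pairs that are far apart in depth
    # or in the image plane (the exact combination is an assumption).
    depth_gap = np.abs(depth[:, None] - depth[None, :])
    spatial_gap = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    prior = -lam * (depth_gap + spatial_gap)
    attn = softmax(logits + prior, axis=-1)
    return attn @ tokens
```

With `lam=0` this reduces to plain dot-product attention; as `lam` grows, each token attends increasingly to geometrically nearby tokens, which is the intuition behind using depth as a prior rather than as an encoded input.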

Benchmark Results

Dataset        Model         Metric     Claimed   Verified   Status
NYU-Depth V2   DFormerv2-L   Mean IoU   58.4      —          Unverified
NYU-Depth V2   DFormerv2-B   Mean IoU   57.7      —          Unverified
NYU-Depth V2   DFormerv2-S   Mean IoU   56.0      —          Unverified
SUN-RGBD       DFormerv2-L   Mean IoU   53.3      —          Unverified
SUN-RGBD       DFormerv2-B   Mean IoU   52.8      —          Unverified
SUN-RGBD       DFormerv2-S   Mean IoU   51.5      —          Unverified
