
ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation

2021-08-24 · ICCV 2021 · Code Available

Jinming Cao, Hanchao Leng, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li


Abstract

RGB-D semantic segmentation has attracted increasing attention over the past few years. Existing methods mostly employ homogeneous convolution operators to consume the RGB and depth features, ignoring their intrinsic differences. In fact, the RGB values capture the photometric appearance properties in the projected image space, while the depth feature encodes both the shape of a local geometry and its base (its whereabouts in the larger context). Compared with the base, the shape is likely more inherent and more strongly connected to the semantics, and thus more critical for segmentation accuracy. Inspired by this observation, we introduce a Shape-aware Convolutional layer (ShapeConv) for processing the depth feature: the depth feature is first decomposed into a shape component and a base component, two learnable weights are then introduced to re-weight the two components independently, and finally a convolution is applied to their re-weighted combination. ShapeConv is model-agnostic and can be easily integrated into most CNNs as a replacement for vanilla convolutional layers in semantic segmentation. Extensive experiments on three challenging indoor RGB-D semantic segmentation benchmarks, i.e., NYU-Dv2(-13,-40), SUN RGB-D, and SID, demonstrate the effectiveness of ShapeConv when employed over five popular architectures. Moreover, ShapeConv boosts performance without any increase in computation or memory at inference. The reason is that the learned weights balancing the shape and base components become constants in the inference phase and can thus be fused into the following convolution, yielding a network identical to one with vanilla convolutional layers.
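The decomposition described above can be sketched numerically. The following is a minimal, single-channel NumPy illustration (not the authors' implementation, which operates on multi-channel feature patches with learnable per-channel weights): each convolution window is split into a base component (the patch mean) and a shape component (the residual), the two are re-weighted by scalars `w_base` and `w_shape`, and a vanilla kernel is applied to the recombined patch.

```python
import numpy as np

def shape_conv2d(x, kernel, w_base, w_shape):
    """Shape-aware convolution sketch: single channel, stride 1, no padding.

    base  = mean of the patch (where the local geometry sits)
    shape = patch - base     (the local geometry itself)
    The output is a vanilla convolution of w_base*base + w_shape*shape.
    """
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw]
            base = patch.mean()          # base component
            shape = patch - base         # shape component
            out[i, j] = np.sum(kernel * (w_base * base + w_shape * shape))
    return out
```

This also makes the fusion claim in the abstract concrete: since `w_base*base + w_shape*shape = w_shape*patch + (w_base - w_shape)*mean(patch)`, the constant inference-time weights fold into an ordinary kernel `w_shape*kernel + (w_base - w_shape)*kernel.sum()/(kh*kw)`, so the deployed network reduces to plain convolutions.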

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| GAMUS | ShapeConv | mIoU | 55.86 | — | Unverified |
| LLRGBD-synthetic | ShapeConv (ResNeXt-101) | mIoU | 63.26 | — | Unverified |
| NYU-Depth V2 | ShapeConv (ResNet-101) | Mean IoU | 49 | — | Unverified |
| NYU-Depth V2 | ShapeConv (ResNet-50) | Mean IoU | 48.8 | — | Unverified |
| NYU-Depth V2 | ShapeConv (ResNeXt-101) | Mean IoU | 51.3 | — | Unverified |
| Stanford2D3D - RGBD | ShapeConv-101 | mIoU | 60.6 | — | Unverified |
| SUN-RGBD | PSD-ResNet50 | Mean IoU | 50.6 | — | Unverified |
| SUN-RGBD | PSD-ResNet50 | Mean IoU | 45.9 | — | Unverified |
| SUN-RGBD | PSD-ResNet50 | Mean IoU | 48.6 | — | Unverified |

Reproductions