SOTAVerified

Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

2018-08-11 · Code Available

Abhinav Valada, Rohit Mohan, Wolfram Burgard

Abstract

Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation fusion mechanism, which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling module that has a larger effective receptive field with more than 10x fewer parameters, complemented by a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on several benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance.
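The fusion mechanism outlined in the abstract can be pictured concretely: the two modality-specific feature maps are concatenated, squeezed through a bottleneck that produces sigmoid gating weights, re-weighted, and then merged back into a single stream. The snippet below is a minimal, illustrative PyTorch sketch of such an SSMA-style fusion block; the class name `SSMAFusion`, the `reduction` ratio, the kernel sizes, and the example tensor shapes are assumptions made for this example, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class SSMAFusion(nn.Module):
    """Illustrative SSMA-style fusion block (sketch, not the reference code).

    Concatenated modality features are passed through a bottleneck that
    emits sigmoid gating weights; the weighted concatenation is then
    projected back to a single feature stream for the decoder.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        concat_channels = 2 * channels  # two modality streams
        self.gate = nn.Sequential(
            nn.Conv2d(concat_channels, concat_channels // reduction, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(concat_channels // reduction, concat_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-location, per-channel fusion weights
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(concat_channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the modality-specific feature maps along the channel axis.
        x = torch.cat([feat_a, feat_b], dim=1)
        # Compute gating weights conditioned on both modalities.
        weights = self.gate(x)
        # Re-weight the concatenation, then merge into a single stream.
        return self.fuse(x * weights)


# Usage example with hypothetical RGB and depth encoder features of shape (N, 512, H, W).
rgb = torch.randn(2, 512, 24, 48)
depth = torch.randn(2, 512, 24, 48)
fused = SSMAFusion(channels=512)(rgb, depth)
print(fused.shape)  # torch.Size([2, 512, 24, 48])
```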

Tasks

Semantic Segmentation

Benchmark Results

Dataset            Model       Metric              Claimed   Verified   Status
Cityscapes test    SSMA        Mean IoU (class)    82.3      —          Unverified
Cityscapes test    AdapNet++   Mean IoU (class)    81.24     —          Unverified
Freiburg Forest    AdapNet++   Mean IoU            83.09     —          Unverified
Freiburg Forest    SSMA        Mean IoU            84.18     —          Unverified
ScanNetV2          SSMA        Mean IoU            57.7      —          Unverified
ScanNetV2          AdapNet++   Mean IoU            50.3      —          Unverified
SUN-RGBD           DPLNet      Mean IoU            52.8      —          Unverified
SUN-RGBD           DPLNet      Mean IoU            49.7      —          Unverified
SUN-RGBD           DPLNet      Mean IoU            48.47     —          Unverified
SUN-RGBD           DPLNet      Mean IoU            38.4      —          Unverified
SUN-RGBD           DPLNet      Mean IoU            45.1      —          Unverified
SYNTHIA-CVPR’16    AdapNet++   Mean IoU            87.87     —          Unverified
SYNTHIA-CVPR’16    SSMA        Mean IoU            92.1      —          Unverified

Reproductions