Missing Modality Robustness in Semi-Supervised Multi-Modal Semantic Segmentation

2023-04-21

Harsh Maheshwari, Yen-Cheng Liu, Zsolt Kira

Abstract

Using multiple spatial modalities has been proven helpful in improving semantic segmentation performance. However, two real-world challenges have yet to be addressed: (a) improving label efficiency and (b) enhancing robustness in realistic scenarios where modalities are missing at test time. To address these challenges, we first propose a simple yet efficient multi-modal fusion mechanism, Linear Fusion, that performs better than state-of-the-art multi-modal models even with limited supervision. Second, we propose M3L: Multi-modal Teacher for Masked Modality Learning, a semi-supervised framework that not only improves multi-modal performance but also makes the model robust to the realistic missing-modality scenario using unlabeled data. We create the first benchmark for semi-supervised multi-modal semantic segmentation and also report robustness to missing modalities. Our proposal shows an absolute improvement of up to 10% in robust mIoU over the most competitive baselines. Our code is available at https://github.com/harshm121/M3L
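The abstract only names the two components, so the following PyTorch sketch illustrates how they might look rather than the paper's actual implementation (see the repository above for that). `LinearFusion`, `m3l_unlabeled_step`, and all hyperparameters are hypothetical: the first treats fusion as a learned per-pixel linear map over concatenated RGB and depth features, and the second shows a mean-teacher style consistency step in which the multi-modal teacher sees both modalities while the student sees a randomly masked one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearFusion(nn.Module):
    """Hypothetical sketch of a linear multi-modal fusion block.

    A 1x1 convolution over the concatenated modality features acts as a
    learned per-pixel linear combination of the RGB and depth streams.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_rgb: torch.Tensor, feat_depth: torch.Tensor) -> torch.Tensor:
        return self.proj(torch.cat([feat_rgb, feat_depth], dim=1))


def m3l_unlabeled_step(student: nn.Module, teacher: nn.Module,
                       rgb: torch.Tensor, depth: torch.Tensor,
                       p_mask_rgb: float = 0.5) -> torch.Tensor:
    """Hypothetical consistency step on an unlabeled batch.

    The multi-modal teacher predicts from both modalities; the student
    receives one randomly masked (zeroed) modality and is trained to match
    the teacher, encouraging robustness when a modality is missing at test
    time.
    """
    with torch.no_grad():
        teacher_logits = teacher(rgb, depth)  # teacher always sees both modalities
    if torch.rand(()).item() < p_mask_rgb:
        student_logits = student(torch.zeros_like(rgb), depth)  # RGB masked
    else:
        student_logits = student(rgb, torch.zeros_like(depth))  # depth masked
    # Pixel-wise KL divergence between student and teacher predictions.
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
```

In such a setup the teacher's weights would typically be an exponential moving average of the student's, and this consistency loss would be added to the usual supervised loss on labeled batches.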

Tasks

Semantic Segmentation

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Stanford2D3D (RGBD) | Linear Fusion (SegFormer-B2) | mIoU | 57.16 | | Unverified |
| SUN-RGBD | DFormer-L | Mean IoU | 49.6 | | Unverified |
| SUN-RGBD | DFormer-L | Mean IoU | 48.3 | | Unverified |
| SUN-RGBD | DFormer-L | Mean IoU | 44.3 | | Unverified |
| SUN-RGBD | DFormer-L | Mean IoU | 52.5 | | Unverified |
| SUN-RGBD | DFormer-L | Mean IoU (test) | 48.17 | | Unverified |

Reproductions

None yet. Be the first to reproduce this paper.