SOTAVerified

Learning to Exploit Multiple Vision Modalities by Using Grafted Networks

2020-03-24 · ECCV 2020 · Unverified

Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu



Abstract

Novel vision sensors such as thermal, hyperspectral, polarization, and event cameras provide information that is not available from conventional intensity cameras. An obstacle to using these sensors with current powerful deep neural networks is the lack of large labeled training datasets. This paper proposes a Network Grafting Algorithm (NGA), in which a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames. The self-supervised training uses only synchronously-recorded intensity frames and novel sensor data to maximize feature similarity between the pretrained network and the grafted network. We show that the grafted network achieves average precision (AP50) scores competitive with those of the pretrained network on an object detection task using thermal and event camera datasets, with no increase in inference costs. In particular, the grafted network driven by thermal frames shows a relative improvement of 49.11% over the use of intensity frames. The grafted front end has only 5--8% of the total parameters and can be trained in a few hours on a single GPU, about 5% of the time that would be needed to train the entire object detector from labeled data. NGA allows new vision sensors to capitalize on previously pretrained powerful deep models, saving training cost and widening the range of applications for novel sensors.
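The training signal the abstract describes can be sketched in a few lines: the grafted front end, driven by the novel sensor, is optimized so its output features match those produced by the frozen pretrained front end on the synchronously-recorded intensity frames. The sketch below is an illustrative assumption, not the paper's architecture: the front ends are stand-in linear maps, the paired thermal data is simulated as noisy intensity, and a plain mean-squared-error feature loss stands in for the paper's feature-similarity objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synchronously recorded pairs: intensity frames and (hypothetical) paired
# novel-sensor data, here simulated as noisy copies of the intensity input.
intensity = rng.normal(size=(64, 16))
thermal = intensity + 0.1 * rng.normal(size=(64, 16))

W_pre = rng.normal(size=(16, 8))    # frozen pretrained front-end weights
W_graft = rng.normal(size=(16, 8))  # grafted front end, trained from scratch

# Target features come from the frozen intensity front end; no labels needed.
target = intensity @ W_pre

initial_loss = np.mean((thermal @ W_graft - target) ** 2)

lr = 0.5
for _ in range(500):
    feats = thermal @ W_graft                # grafted front-end features
    err = feats - target                     # feature-similarity residual
    loss = np.mean(err ** 2)                 # self-supervised MSE loss
    grad = 2 * thermal.T @ err / err.size    # gradient of the loss w.r.t. W_graft
    W_graft -= lr * grad
```

After training, `W_graft` maps the novel-sensor input into the pretrained network's feature space, so the frozen back end can be reused unchanged; this is the property that keeps inference cost identical to the original detector.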

Tasks

Benchmark Results

Dataset     Model   Metric   Claimed   Verified   Status
MVSEC-SEG   NGA     mIoU     0.32      —          Unverified
RGBE-SEG    NGA     mIoU     0.30      —          Unverified

Reproductions