
GSVNet: Guided Spatially-Varying Convolution for Fast Semantic Segmentation on Video

2021-03-16 · Code Available

Shih-Po Lee, Si-Cun Chen, Wen-Hsiao Peng

Abstract

This paper addresses fast semantic segmentation on video. Video segmentation often calls for real-time, or even faster-than-real-time, processing. One common recipe for reducing the computation spent on feature extraction is to propagate features from a few selected keyframes. However, recent advances in fast image segmentation make these solutions less attractive. To leverage fast image segmentation for furthering video segmentation, we propose a simple yet efficient propagation framework. Specifically, we perform lightweight flow estimation in 1/8-downscaled image space for temporal warping in segmentation output space. Moreover, we introduce a guided spatially-varying convolution for fusing segmentations derived from the previous and current frames, to mitigate propagation error and enable lightweight feature extraction on non-keyframes. Experimental results on Cityscapes and CamVid show that our scheme achieves a state-of-the-art accuracy-throughput trade-off on video segmentation.
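The two core operations described in the abstract, temporal warping of the previous frame's segmentation and per-pixel fusion with the current frame's segmentation, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it uses nearest-neighbor warping, and it takes the per-pixel fusion weights as a given input, whereas in the paper those weights are produced by the guided spatially-varying convolution.

```python
import numpy as np

def warp_segmentation(seg_prev, flow):
    """Warp previous-frame segmentation logits with a backward flow field.

    seg_prev: (C, H, W) class logits from the previous frame.
    flow:     (2, H, W) backward flow (dy, dx) for each target pixel.
    Nearest-neighbor sampling keeps the sketch simple; warping happens
    in segmentation output space, as the abstract describes.
    """
    C, H, W = seg_prev.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return seg_prev[:, src_y, src_x]

def fuse_segmentations(seg_warp, seg_cur, weights):
    """Fuse warped and current segmentations with spatially-varying weights.

    weights: (H, W) values in [0, 1]; a weight near 1 trusts the propagated
    (warped) result, near 0 trusts the current frame's lightweight prediction.
    In the paper these weights come from a guided spatially-varying
    convolution; here they are simply an input (assumption).
    """
    return weights * seg_warp + (1.0 - weights) * seg_cur
```

With zero flow, `warp_segmentation` returns the previous segmentation unchanged, and the fusion reduces to a per-pixel convex combination of the two predictions.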
