
Spatial Feature Calibration and Temporal Fusion for Effective One-stage Video Instance Segmentation

2021-04-06 · CVPR 2021 · Code Available

Minghan Li, Shuai Li, Lida Li, Lei Zhang


Abstract

Modern one-stage video instance segmentation networks suffer from two limitations. First, convolutional features are aligned with neither anchor boxes nor ground-truth bounding boxes, reducing the mask sensitivity to spatial location. Second, a video is directly divided into individual frames for frame-level instance segmentation, ignoring the temporal correlation between adjacent frames. To address these issues, we propose a simple yet effective one-stage video instance segmentation framework based on spatial calibration and temporal fusion, namely STMask. To ensure spatial feature calibration with ground-truth bounding boxes, we first predict regressed bounding boxes around ground-truth bounding boxes and extract features from them for frame-level instance segmentation. To further exploit the temporal correlation among video frames, we add a temporal fusion module that infers instance masks from each frame to its adjacent frames, which helps our framework handle challenging videos with motion blur, partial occlusion, and unusual object-to-camera poses. Experiments on the YouTube-VIS valid set show that the proposed STMask with a ResNet-50/-101 backbone obtains 33.5%/36.8% mask AP while achieving 28.6/23.4 FPS on video instance segmentation. The code is released at https://github.com/MinghanLi/STMask.
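The abstract's two ingredients can be illustrated with a minimal sketch. `apply_box_deltas` applies the standard box-regression transform that the spatial-calibration step assumes (predicting regressed boxes around ground-truth boxes), and `temporal_fusion` is a placeholder for the temporal fusion module, blending current-frame features with aligned features from an adjacent frame. Both function names and the blend weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def apply_box_deltas(boxes, deltas):
    """Standard box-regression transform (assumed form).

    boxes:  (N, 4) array of (x1, y1, x2, y2)
    deltas: (N, 4) array of (dx, dy, dw, dh) offsets
    Returns regressed boxes of shape (N, 4).
    """
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    cx = boxes[:, 0] + 0.5 * widths
    cy = boxes[:, 1] + 0.5 * heights
    # Shift centers proportionally to box size, scale sides exponentially.
    new_cx = cx + deltas[:, 0] * widths
    new_cy = cy + deltas[:, 1] * heights
    new_w = widths * np.exp(deltas[:, 2])
    new_h = heights * np.exp(deltas[:, 3])
    return np.stack([new_cx - 0.5 * new_w, new_cy - 0.5 * new_h,
                     new_cx + 0.5 * new_w, new_cy + 0.5 * new_h], axis=1)

def temporal_fusion(feat_t, feat_adj, alpha=0.5):
    """Blend current-frame features with adjacent-frame features.

    A hypothetical stand-in for the temporal fusion module: masks on
    blurred or occluded frames borrow evidence from their neighbours.
    `alpha` is an assumed blend weight, not taken from the paper.
    """
    assert feat_t.shape == feat_adj.shape
    return alpha * feat_t + (1.0 - alpha) * feat_adj
```

With zero deltas the regressed box equals the input box, and `alpha=1.0` reduces the fusion to plain frame-level features, which makes the two operations easy to sanity-check in isolation.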

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| OVIS validation | STMask (R101-DCN-FPN) | mask AP | 17.3 | | Unverified |
| YouTube-VIS 2021 | STMask (R101-DCN-FPN) | mask AP | 34.6 | | Unverified |
| YouTube-VIS validation | STMask (R101-DCN-FPN) | mask AP | 36.8 | | Unverified |

Reproductions