
Mask2Former for Video Instance Segmentation

2021-12-20 · Code Available

Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, Alexander G. Schwing


Abstract

We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline. In this report, we show universal image segmentation architectures trivially generalize to video segmentation by directly predicting 3D segmentation volumes. Specifically, Mask2Former sets a new state-of-the-art of 60.4 AP on YouTubeVIS-2019 and 52.6 AP on YouTubeVIS-2021. We believe Mask2Former is also capable of handling video semantic and panoptic segmentation, given its versatility in image segmentation. We hope this will make state-of-the-art video segmentation research more accessible and bring more attention to designing universal image and video segmentation architectures.
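The abstract's key idea is that an image segmentation architecture extends to video by predicting a 3D (time × height × width) mask volume per object query instead of a 2D mask. Below is a minimal NumPy sketch of that shape change; all names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper):
# N object queries, C feature channels, T frames, H x W spatial resolution.
N, C, T, H, W = 10, 16, 4, 12, 20

mask_embed = np.random.randn(N, C)         # one mask embedding per object query
pixel_feats = np.random.randn(C, T, H, W)  # per-frame pixel decoder features

# Dotting each query embedding with the per-frame features over the channel
# axis yields one T x H x W spatio-temporal mask logit volume per query --
# the same operation as the image case, with an extra time axis.
mask_logits = np.einsum("nc,cthw->nthw", mask_embed, pixel_feats)
print(mask_logits.shape)  # (10, 4, 12, 20)
```

Because the time axis rides along for free in the einsum, the architecture, loss, and training pipeline need no modification, which is the claim the abstract makes.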

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| OVIS validation | Mask2Former-VIS | mask AP | 16.6 | — | Unverified |
| YouTube-VIS validation | Mask2Former (Swin-L) | mask AP | 60.4 | — | Unverified |
| YouTube-VIS validation | Mask2Former (ResNet-101) | mask AP | 49.2 | — | Unverified |
| YouTube-VIS validation | Mask2Former (ResNet-50) | mask AP | 46.4 | — | Unverified |

Reproductions