NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation
Tim Meinhardt, Matt Feiszli, Yuchen Fan, Laura Leal-Taixé, Rakesh Ranjan
Abstract
Until recently, the Video Instance Segmentation (VIS) community operated under the common belief that offline methods are generally superior to frame-by-frame online processing. However, the recent success of online methods questions this belief, in particular for challenging and long video sequences. We understand this work as a rebuttal of those recent observations and an appeal to the community to focus on dedicated near-online VIS approaches. To support our argument, we present a detailed analysis of different processing paradigms and the new end-to-end trainable NOVIS (Near-Online Video Instance Segmentation) method. Our transformer-based model directly predicts spatio-temporal mask volumes for clips of frames and performs instance tracking between clips via overlap embeddings. NOVIS represents the first near-online VIS approach which avoids any handcrafted tracking heuristics. We outperform all existing VIS methods by large margins and provide new state-of-the-art results on both the YouTube-VIS (2019/2021) and OVIS benchmarks.
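The abstract describes tracking instances between consecutive clips via overlap embeddings: because adjacent clips share frames, each instance produces an embedding in both clips, and instances are associated by matching these embeddings. The following is a minimal, hypothetical sketch of such an association step (the function name, the use of cosine similarity, and Hungarian matching are assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_clip_instances(prev_emb: np.ndarray, next_emb: np.ndarray):
    """Associate instances of two overlapping clips by embedding similarity.

    prev_emb: (N, D) instance embeddings from the earlier clip's overlap frames
    next_emb: (M, D) instance embeddings from the later clip's overlap frames
    Returns a list of (prev_index, next_index) pairs.
    """
    # Cosine similarity between L2-normalised embeddings.
    a = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
    b = next_emb / np.linalg.norm(next_emb, axis=1, keepdims=True)
    sim = a @ b.T

    # Hungarian matching maximises total similarity (minimises -sim).
    rows, cols = linear_sum_assignment(-sim)
    return list(zip(rows.tolist(), cols.tolist()))
```

In a real system one would additionally threshold low-similarity pairs to start new tracks; this sketch only shows the core one-to-one assignment over the clip overlap.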
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| OVIS validation | NOVIS (Swin-L) | mask AP | 43.5 | — | Unverified |
| OVIS validation | NOVIS (ResNet-50) | mask AP | 32.7 | — | Unverified |
| YouTube-VIS 2021 | NOVIS (Swin-L) | mask AP | 59.8 | — | Unverified |
| YouTube-VIS 2021 | NOVIS (ResNet-50) | mask AP | 47.2 | — | Unverified |
| YouTube-VIS 2019 | NOVIS (ResNet-50) | mask AP | 52.8 | — | Unverified |