
Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation

2021-06-09 · NeurIPS 2021 · Code Available

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang


Abstract

This paper presents a simple yet effective approach to modeling space-time correspondences in the context of video object segmentation. Unlike most existing approaches, we establish correspondences directly between frames without re-encoding the mask features for every object, leading to a highly efficient and robust framework. With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion. We cast the aggregation process as a voting problem and find that the existing inner-product affinity leads to poor use of memory, with a small (fixed) subset of memory nodes dominating the votes regardless of the query. In light of this phenomenon, we propose using the negative squared Euclidean distance instead to compute the affinities. We validated that every memory node now has a chance to contribute, and experimentally showed that such diversified voting is beneficial to both memory efficiency and inference accuracy. The synergy of correspondence networks and diversified voting works exceedingly well: it achieves new state-of-the-art results on both the DAVIS and YouTube-VOS datasets while running significantly faster, at 20+ FPS for multiple objects, without bells and whistles.
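The key change the abstract describes is the affinity used for memory reading: negative squared Euclidean distance instead of an inner product. Since -||k - q||^2 = 2 k·q - ||k||^2 - ||q||^2 and the ||q||^2 term is constant per query, it cancels in the softmax, so the L2 affinity costs essentially one extra norm term over a dot-product attention. A minimal numpy sketch of this memory readout (function names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def l2_affinity(mem_keys, query_keys):
    """Negative squared Euclidean affinity, up to a per-query constant.

    -||k - q||^2 expands to 2 k.q - ||k||^2 - ||q||^2; the ||q||^2 term
    is constant along the memory axis and cancels in the softmax, so it
    is dropped here.
    """
    dot = mem_keys @ query_keys.T                       # (N, M)
    k_sq = (mem_keys ** 2).sum(axis=1, keepdims=True)   # (N, 1)
    return 2.0 * dot - k_sq                             # (N, M)

def read_memory(mem_keys, mem_values, query_keys):
    """Softmax-weighted readout of memory values for each query node."""
    logits = l2_affinity(mem_keys, query_keys)          # (N, M)
    logits -= logits.max(axis=0, keepdims=True)         # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)                   # softmax over memory
    return mem_values.T @ w                             # (Cv, M) readout
```

Because the dropped ||q||^2 term is uniform across memory nodes, this readout is exactly equal to one computed from the full pairwise distances, while avoiding the (N, M, C) intermediate tensor.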

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| DAVIS 2016 | STCN | J&F | 91.7 | — | Unverified |
| DAVIS 2017 (test-dev) | STCN | J&F | 79.9 | — | Unverified |
| DAVIS 2017 (val) | STCN | J&F | 85.3 | — | Unverified |
| MOSE | STCN | J&F | 50.8 | — | Unverified |
| YouTube-VOS 2018 | STCN | Jaccard (Seen) | 83.2 | — | Unverified |
| YouTube-VOS 2019 | STCN (MS) | Overall | 85.2 | — | Unverified |
| YouTube-VOS 2019 | STCN | Overall | 84.2 | — | Unverified |
