
Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping

2023-04-17 · CVPR 2023 · Code Available

Long Lian, Zhirong Wu, Stella X. Yu


Abstract

We study learning object segmentation from unlabeled videos. Humans can easily segment moving objects without knowing what they are. The Gestalt law of common fate, i.e., what moves at the same speed belongs together, has inspired unsupervised object discovery based on motion segmentation. However, common fate is not a reliable indicator of objectness: parts of an articulated or deformable object may not move at the same speed, whereas shadows and reflections of an object always move with it but are not part of it. Our insight is to bootstrap objectness by first learning image features from relaxed common fate and then refining them based on visual appearance grouping, both within the image itself and statistically across images. Specifically, we first learn an image segmenter in the loop of approximating optical flow with constant segment flow plus small within-segment residual flow, and then refine it for more coherent appearance and statistical figure-ground relevance. On unsupervised video object segmentation, using only a ResNet and convolutional heads, our model surpasses the state of the art by absolute gains of 7/9/5% on DAVIS16 / STv2 / FBMS59 respectively, demonstrating the effectiveness of our ideas. Our code is publicly available.
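The "relaxed common fate" objective described above can be sketched as follows. This is a hypothetical NumPy illustration of the idea, not the authors' implementation: each segment's optical flow is approximated by a single constant (mask-weighted mean) flow, and the within-segment residual flow is penalized only with a small weight, relaxing strict common fate for articulated or deformable parts. The function name and `relax_weight` parameter are assumptions for illustration.

```python
import numpy as np

def relaxed_common_fate_loss(flow, masks, relax_weight=0.1):
    """Hypothetical sketch of a relaxed-common-fate objective.

    flow:  (H, W, 2) optical flow field.
    masks: (K, H, W) soft segment masks (per-pixel weights in [0, 1]).

    Each segment's flow is modeled as its mask-weighted constant mean flow;
    the residual flow inside the segment is penalized with a small weight,
    so parts that deviate slightly from the segment's motion are tolerated.
    """
    total = 0.0
    for m in masks:                                   # m: (H, W)
        w = m[..., None]                              # broadcast over flow dims
        area = w.sum() + 1e-8                         # soft segment area
        mean_flow = (w * flow).sum(axis=(0, 1)) / area  # constant segment flow
        residual = flow - mean_flow                     # within-segment residual
        total += relax_weight * (w * residual ** 2).sum() / area
    return total
```

With masks that exactly match regions of constant flow, the residual vanishes and the loss is zero; a single mask spanning two differently moving regions incurs a positive penalty.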


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DAVIS 2016 | RCF (with post-processing) | J score | 83 | — | Unverified |
| DAVIS 2016 | RCF (without post-processing) | J score | 80.9 | — | Unverified |
| FBMS-59 | RCF (with post-processing) | mIoU | 72.4 | — | Unverified |
| FBMS-59 | RCF (without post-processing) | mIoU | 69.9 | — | Unverified |
| SegTrack-v2 | RCF (with post-processing) | mIoU | 79.6 | — | Unverified |
| SegTrack-v2 | RCF (without post-processing) | mIoU | 76.7 | — | Unverified |

Reproductions