SOTAVerified

Online Adaptation of Convolutional Neural Networks for Video Object Segmentation

2017-06-28

Paul Voigtlaender, Bastian Leibe

Abstract

We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in the video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%.
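The abstract describes online adaptation as selecting new training examples from the network's own predictions: highly confident foreground pixels become positives, while pixels spatially far from the predicted object become negatives. A minimal sketch of that selection step is below; the thresholds (`pos_thresh`, `dist_thresh`) and the bounding-box distance criterion are illustrative assumptions, not the paper's exact values or spatial test.

```python
import numpy as np

def select_online_training_pixels(fg_prob, pos_thresh=0.97, dist_thresh=50):
    """Sketch of OnAVOS-style online example selection (assumed thresholds).

    Pixels with very high foreground confidence become positive examples;
    pixels far from the predicted object region become negative examples;
    everything else is left unlabeled and excluded from the online update.
    Returns a label map: 1 = positive, 0 = negative, -1 = ignore.
    """
    labels = np.full(fg_prob.shape, -1, dtype=np.int8)
    labels[fg_prob > pos_thresh] = 1  # confident foreground -> positives

    # Crude stand-in for the paper's spatial criterion: Chebyshev distance
    # from the bounding box of the predicted object (a faithful version
    # would use a distance transform of the last predicted mask).
    fg = fg_prob > 0.5
    if fg.any():
        ys, xs = np.nonzero(fg)
        yy, xx = np.mgrid[0:fg_prob.shape[0], 0:fg_prob.shape[1]]
        dy = np.maximum(ys.min() - yy, yy - ys.max()).clip(min=0)
        dx = np.maximum(xs.min() - xx, xx - xs.max()).clip(min=0)
        labels[np.maximum(dy, dx) > dist_thresh] = 0  # far away -> negatives
    else:
        labels[:] = 0  # no object predicted: treat the frame as background
    return labels
```

The resulting label map would then drive a few fine-tuning steps of the segmentation network on the current frame, with ignored pixels masked out of the loss.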

Tasks

Semi-Supervised Video Object Segmentation

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DAVIS 2016 | OnAVOS | J&F | 85.5 | | Unverified |
| DAVIS 2017 (test-dev) | OnAVOS | J&F | 52.8 | | Unverified |
| DAVIS 2017 (val) | OnAVOS | J&F | 65.35 | | Unverified |
| YouTube | OnAVOS | mIoU | 0.77 | | Unverified |

Reproductions