SOTAVerified

Deep Interactive Object Selection

2016-03-13 · CVPR 2016 · Code Available

Ning Xu, Brian Price, Scott Cohen, Jimei Yang, Thomas Huang


Abstract

Interactive object selection is an important research problem with many applications. Previous algorithms require substantial user interaction to estimate the foreground and background distributions. In this paper, we present a novel deep-learning-based algorithm that has a much better understanding of objectness and can therefore reduce user interaction to just a few clicks. Our algorithm transforms user-provided positive and negative clicks into two Euclidean distance maps, which are then concatenated with the RGB channels of the image to compose (image, user interactions) pairs. We generate many such pairs by combining several random sampling strategies to model user click patterns, and use them to fine-tune deep Fully Convolutional Networks (FCNs). Finally, the output probability maps of our FCN-8s model are integrated with graph cut optimization to refine the segment boundaries. Our model is trained on the PASCAL segmentation dataset and evaluated on other datasets with different object classes. Experimental results on both seen and unseen objects clearly demonstrate that our algorithm generalizes well and outperforms all existing interactive object selection approaches.
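The click-encoding step described above can be sketched concretely: each set of clicks (positive or negative) becomes a single-channel map where each pixel holds the Euclidean distance to its nearest click. A minimal NumPy sketch, assuming (row, col) pixel coordinates and the paper's truncation of distances at 255 so the map fits a byte channel:

```python
import numpy as np

def click_distance_map(clicks, height, width, cap=255.0):
    """Euclidean distance from every pixel to its nearest click, capped.

    `clicks` is a list of (row, col) pixel coordinates. The cap of 255
    follows the paper's truncation of the distance maps.
    """
    rows = np.arange(height)[:, None, None]      # shape (H, 1, 1)
    cols = np.arange(width)[None, :, None]       # shape (1, W, 1)
    pts = np.asarray(clicks, dtype=np.float64)   # shape (K, 2)
    # Broadcast to an (H, W, K) stack of per-click distances.
    d = np.sqrt((rows - pts[:, 0]) ** 2 + (cols - pts[:, 1]) ** 2)
    return np.minimum(d.min(axis=2), cap)

# Compose the 5-channel network input: RGB + positive map + negative map.
# `image` is assumed to be an (H, W, 3) array; the function and variable
# names here are illustrative, not the authors' code.
def compose_input(image, pos_clicks, neg_clicks):
    h, w = image.shape[:2]
    pos = click_distance_map(pos_clicks, h, w)
    neg = click_distance_map(neg_clicks, h, w)
    return np.dstack([image, pos, neg])          # shape (H, W, 5)
```

The two distance maps give the FCN a dense, spatially smooth encoding of sparse clicks, which is what lets a convolutional network consume user interactions at all.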

Benchmark Results

| Dataset | Model       | Metric | Claimed | Verified | Status     |
|---------|-------------|--------|---------|----------|------------|
| DAVIS   | DOS with GC | NoC@90 | 12.58   | —        | Unverified |
| DAVIS   | DOS w/o GC  | NoC@90 | 17.11   | —        | Unverified |
| GrabCut | DOS with GC | NoC@90 | 6.08    | —        | Unverified |
| GrabCut | DOS w/o GC  | NoC@90 | 12.59   | —        | Unverified |
| SBD     | DOS with GC | NoC@90 | 12.8    | —        | Unverified |
| SBD     | DOS w/o GC  | NoC@90 | 16.79   | —        | Unverified |
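The NoC@90 metric in the table is the mean number of clicks needed to reach 90% IoU with the ground-truth mask. A minimal sketch of the per-image computation, assuming `ious[k]` is the IoU after k+1 clicks and a cap of 20 clicks is charged when the threshold is never reached (a common convention in this literature, not confirmed by this page):

```python
def noc_at(ious, threshold=0.90, max_clicks=20):
    """Number of clicks to reach `threshold` IoU, capped at `max_clicks`.

    `ious` lists the IoU after each successive click; the cap is charged
    if the threshold is never reached (assumed convention).
    """
    for k, iou in enumerate(ious[:max_clicks], start=1):
        if iou >= threshold:
            return k
    return max_clicks
```

Lower is better, which is why the graph cut ("with GC") rows beat their "w/o GC" counterparts on every dataset.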

Reproductions