SOTAVerified

One-Shot Segmentation in Clutter

2018-03-26 · ICML 2018 · Code Available

Claudio Michaelis, Matthias Bethge, Alexander S. Ecker

Abstract

We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example. We propose a novel dataset, which we call cluttered Omniglot. Using a baseline architecture that combines a Siamese embedding for detection with a U-Net for segmentation, we show that increasing levels of clutter make the task progressively harder. Using oracle models with access to varying amounts of ground-truth information, we evaluate different aspects of the problem and show that in this kind of visual search task, detection and segmentation are two intertwined problems, the solution to each of which helps solve the other. We therefore introduce MaskNet, an improved model that attends to multiple candidate locations, generates segmentation proposals to mask out background clutter, and selects among the segmented objects. Our findings suggest that image recognition models based on iterative refinement of object detection and foreground segmentation may provide a way to deal with highly cluttered scenes.
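As a rough illustration of the Siamese detection step the abstract describes, the sketch below embeds a target glyph and every scene window with the same (shared) encoder and scores each window by cosine similarity, returning the best-matching location. The `embed` function here is a hypothetical stand-in for the paper's learned convolutional encoder (just normalized flattening, so the example runs); array sizes, names, and the toy scene are illustrative assumptions, not details from the paper.

```python
import numpy as np

def embed(patch):
    # Hypothetical stand-in for the shared (Siamese) encoder:
    # flatten and L2-normalize so dot products are cosine similarities.
    v = patch.astype(float).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def one_shot_match(scene, target, patch=4):
    """Slide over the scene, embed each window with the SAME encoder
    as the target, and return the best-matching top-left corner."""
    t = embed(target)
    best, best_score = None, -np.inf
    H, W = scene.shape
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            s = embed(scene[y:y + patch, x:x + patch])
            score = float(t @ s)  # cosine similarity (both normalized)
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score

# Toy usage: hide a 4x4 target pattern in a noisy 16x16 binary scene.
rng = np.random.default_rng(0)
target = np.eye(4, dtype=int)
scene = rng.integers(0, 2, (16, 16))
scene[5:9, 7:11] = target
loc, score = one_shot_match(scene, target)
print(loc, score)
```

In the paper's full pipeline the matched location would then be handed to a U-Net-style decoder for segmentation; MaskNet additionally keeps several candidate locations, segments each, and selects among the masked proposals rather than committing to a single detection.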

Benchmark Results

Dataset             Model          Metric                 Claimed  Verified  Status
Cluttered Omniglot  MaskNet        IoU [32 distractors]   65.6               Unverified
Cluttered Omniglot  Siamese-U-Net  IoU [32 distractors]   62.4               Unverified
