SOTAVerified

Unified Perceptual Parsing for Scene Understanding

2018-07-26 · ECCV 2018 · Code Available

Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun

Abstract

Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. Models are available at https://github.com/CSAILVision/unifiedparsing.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| WildScenes | UPerNet (ConvNeXt-L) | mIoU | 47.3 | — | Unverified |
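The mIoU figure above is the standard semantic-segmentation metric: per-class intersection-over-union between predicted and ground-truth label maps, averaged over classes. A minimal sketch of how it is typically computed follows; the exact evaluation protocol for WildScenes (e.g. ignored labels, per-image vs. dataset-level accumulation) may differ.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union, averaged over classes that appear
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with classes {0, 1}:
# class 0 IoU = 1/2, class 1 IoU = 2/3, mean ≈ 0.583
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(round(mean_iou(pred, target, num_classes=2), 3))
```

Reported benchmark numbers usually accumulate intersections and unions over the whole dataset before dividing, rather than averaging per-image scores as this toy example does on a single pair of maps.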

Reproductions