Unified Perceptual Parsing for Scene Understanding
Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun
Code
- github.com/CSAILVision/unifiedparsing (official, in paper; PyTorch) ★ 0
- github.com/open-mmlab/mmsegmentation (PyTorch) ★ 9,690
- github.com/CSAILVision/semantic-segmentation-pytorch (PyTorch) ★ 5,064
- github.com/sithu31296/semantic-segmentation (PyTorch) ★ 939
- github.com/dingmyu/davit (PyTorch) ★ 374
- github.com/Burf/tfdetection (TensorFlow) ★ 56
- github.com/ESA-PhiLab/PhilEO-MajorTOM (PyTorch) ★ 6
- github.com/MindCode-4/code-5/tree/main/videomae (MindSpore) ★ 0
- github.com/MindCode-4/code-1/tree/main/upernet (MindSpore) ★ 0
- github.com/Rosie-Brigham/sesmeg (PyTorch) ★ 0
Abstract
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect the objects inside them, while also identifying the textures and surfaces of those objects along with their compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. Models are available at https://github.com/CSAILVision/unifiedparsing.
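The abstract's training strategy — learning from heterogeneous image annotations — amounts to sampling one data source per iteration and updating only the prediction heads for the concept levels that source annotates. A minimal Python sketch of that sampling logic, with illustrative source names and placeholder losses (the actual UPerNet heads and datasets may differ in detail):

```python
import random

# Hypothetical data sources with heterogeneous annotations: each source
# labels only some concept levels (scene / object / part / material / texture).
SOURCES = {
    "scene_parsing": {"scene", "object", "part"},
    "surfaces": {"material"},
    "textures": {"texture"},
}

HEADS = ["scene", "object", "part", "material", "texture"]


def train_step(source, annotated_levels, heads):
    """Update only the heads whose concept level is annotated in the
    sampled source; all other heads receive no gradient this step."""
    updated = []
    for level in heads:
        if level in annotated_levels:
            # placeholder for: loss = criterion(heads[level](features), labels)
            #                  loss.backward(); optimizer.step()
            updated.append(level)
    return updated


# A few iterations: sample a source, then update only its annotated heads.
random.seed(0)
for _ in range(3):
    source = random.choice(sorted(SOURCES))
    print(source, "->", train_step(source, SOURCES[source], HEADS))
```

The key design point is that every mini-batch comes from a single source, so losses on unannotated concept levels are simply never computed rather than being masked out.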
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| WildScenes | UPerNet (ConvNeXt-L) | mIoU | 47.3 | — | Unverified |