Fully Convolutional Networks for Panoptic Segmentation

2020-12-01 · CVPR 2021 · Code Available

Yanwei Li, Hengshuang Zhao, Xiaojuan Qi, Liwei Wang, Zeming Li, Jian Sun, Jiaya Jia


Abstract

In this paper, we present a conceptually simple, strong, and efficient framework for panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline. In particular, Panoptic FCN encodes each object instance or stuff category into a specific kernel weight with the proposed kernel generator and produces the prediction by convolving the high-resolution feature directly. With this approach, instance-aware and semantically consistent properties for things and stuff can be respectively satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and -free models with high efficiency on COCO, Cityscapes, and Mapillary Vistas datasets with single scale input. Our code is made publicly available at https://github.com/Jia-Research-Lab/PanopticFCN.
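The abstract's generate-kernel-then-segment workflow, in which each instance or stuff category is encoded as a kernel weight that is convolved with a shared high-resolution feature, can be sketched numerically. This is a minimal illustration only: the array names and shapes below are assumptions for the sketch, not the repository's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
num_objects, channels, H, W = 5, 64, 128, 128

# Shared high-resolution feature map produced by the backbone/encoder
feature = rng.standard_normal((channels, H, W))

# Kernel generator output: one 1x1 conv weight per object instance
# or stuff category (hypothetical values)
kernels = rng.standard_normal((num_objects, channels))

# A 1x1 convolution is a per-pixel linear map over channels, so
# convolving each generated kernel with the feature yields one
# mask-logit map per thing/stuff prediction, with no boxes involved.
masks = np.einsum('oc,chw->ohw', kernels, feature)
print(masks.shape)  # (5, 128, 128): one map per predicted kernel
```

Because every prediction is produced by the same convolution over the same feature, things get instance-aware kernels while stuff categories get semantically consistent ones, all within one fully convolutional pipeline.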

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Cityscapes val | Panoptic FCN* (ResNet-FPN) | PQ | 61.4 | — | Unverified |
| Cityscapes val | Panoptic FCN* (Swin-L, Cityscapes-fine) | PQst | 70.6 | — | Unverified |
| Cityscapes val | Panoptic FCN* (ResNet-50-FPN) | PQst | 66.6 | — | Unverified |
| COCO minival | Panoptic FCN* (Swin-L, single-scale) | PQth | 58.5 | — | Unverified |
| COCO minival | Panoptic FCN* (ResNet-50-FPN) | PQ | 44.3 | — | Unverified |
| COCO test-dev | Panoptic FCN* (Swin-L) | PQ | 52.7 | — | Unverified |
| COCO test-dev | Panoptic FCN*++ (DCN-101-FPN) | PQ | 47.5 | — | Unverified |
| Mapillary val | Panoptic FCN* (ResNet-50-FPN) | PQst | 42.3 | — | Unverified |
| Mapillary val | Panoptic FCN* (Swin-L, single-scale) | PQ | 45.7 | — | Unverified |
| Mapillary val | Panoptic FCN* (ResNet-FPN) | PQ | 36.9 | — | Unverified |
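The PQ numbers above follow the standard panoptic quality definition, PQ = SQ × RQ, where segments match only when their IoU exceeds 0.5. A minimal sketch of the metric follows; the function name and the example counts are hypothetical, chosen only to illustrate the formula.

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute PQ = SQ * RQ.

    matched_ious: IoU of each matched (prediction, ground-truth) pair;
                  each entry is one true positive (match requires IoU > 0.5).
    num_fp: unmatched predicted segments (false positives).
    num_fn: unmatched ground-truth segments (false negatives).
    """
    tp = len(matched_ious)
    if tp + num_fp + num_fn == 0:
        return 0.0
    # Segmentation quality: average IoU over true positives
    sq = sum(matched_ious) / tp if tp else 0.0
    # Recognition quality: an F1-style detection score
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)
    return sq * rq

# Hypothetical example: three matches, one FP, one FN
print(panoptic_quality([0.8, 0.9, 0.7], num_fp=1, num_fn=1))  # ≈ 0.6
```

PQth and PQst in the table are the same quantity restricted to thing and stuff classes, respectively.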

Reproductions