
Self-Correction for Human Parsing

2019-10-22 · Code Available

Peike Li, Yunqiu Xu, Yunchao Wei, Yi Yang


Abstract

Labeling pixel-level masks for fine-grained semantic segmentation tasks, e.g. human parsing, remains challenging. Ambiguous boundaries between different semantic parts, and categories with similar appearance, are easily confused, introducing unexpected noise into ground-truth masks. To tackle the problem of learning with label noise, this work introduces a purification strategy, called Self-Correction for Human Parsing (SCHP), that progressively improves the reliability of both the supervised labels and the learned models. In particular, starting from a model trained with inaccurate annotations as initialization, we design a cyclical learning scheduler to infer more reliable pseudo-masks by iteratively aggregating the current model with the former optimal one in an online manner. The correspondingly corrected labels can in turn further boost model performance. In this way, the models and the labels reciprocally become more robust and accurate over the self-correction learning cycles. Benefiting from SCHP, we achieve the best performance on two popular single-person human parsing benchmarks, the LIP and Pascal-Person-Part datasets. Our overall system ranks 1st in the CVPR 2019 LIP Challenge. Code is available at https://github.com/PeikeLi/Self-Correction-Human-Parsing.
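The "iteratively aggregating the current learned model with the former optimal one" step can be read as an online running average over self-correction cycles. The sketch below is an assumption about that scheme, not the authors' implementation: it uses plain NumPy arrays to stand in for model weights (or soft pseudo-labels), and a hypothetical `aggregate` helper implementing the running-average update.

```python
import numpy as np

def aggregate(prev_avg, current, cycle):
    """Online running average after `cycle` previous cycles.

    Hypothetical sketch of SCHP-style cyclic aggregation: the same
    update could be applied to model weights or soft pseudo-masks.
    """
    return (cycle * prev_avg + current) / (cycle + 1)

# Toy "weights" produced at each self-correction cycle.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]

avg = weights[0]  # initialization: model trained on noisy annotations
for m, w in enumerate(weights[1:], start=1):
    avg = aggregate(avg, w, m)

# After all cycles, `avg` equals the mean of the per-cycle weights,
# so no single (possibly noisy) cycle dominates the final model.
print(avg)  # -> [3. 4.]
```

Under this reading, each cycle's model contributes equally to the aggregate, which smooths out noise from any individual training cycle; the aggregated model is then used to regenerate pseudo-masks for the next cycle.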

Tasks

Benchmark Results

Dataset   | Model      | Metric | Claimed | Verified | Status
4D-DRESS  | SCHP_Inner | mAcc   | 0.91    |          | Unverified
4D-DRESS  | SCHP_Outer | mAcc   | 0.86    |          | Unverified

Reproductions