
Perceive Where to Focus: Learning Visibility-aware Part-level Features for Partial Person Re-identification

2019-04-01 · CVPR 2019 · Code Available

Yifan Sun, Qin Xu, Ya-Li Li, Chi Zhang, Yikang Li, Shengjin Wang, Jian Sun


Abstract

This paper considers a realistic problem in the person re-identification (re-ID) task, i.e., partial re-ID. In the partial re-ID scenario, images may contain only a partial observation of a pedestrian. If we directly compare a partial pedestrian image with a holistic one, the extreme spatial misalignment significantly compromises the discriminative ability of the learned representation. We propose a Visibility-aware Part Model (VPM), which learns to perceive the visibility of regions through self-supervision. Visibility awareness allows VPM to extract region-level features and compare two images with a focus on their shared regions (regions visible in both images). VPM gains a two-fold benefit toward higher accuracy for partial re-ID. On the one hand, compared with learning a global feature, VPM learns region-level features and benefits from fine-grained information. On the other hand, with visibility awareness, VPM is able to estimate the shared regions between two images and thus suppress the spatial misalignment. Experimental results confirm that our method significantly improves the learned representation and that the achieved accuracy is on par with the state of the art.
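The shared-region comparison the abstract describes can be sketched as a visibility-weighted distance: each image yields per-region features and per-region visibility scores, and only regions visible in both images contribute to the match. This is a minimal illustrative sketch, not the paper's implementation; the function name `vpm_distance`, the Euclidean per-region distance, and the joint-visibility weighting scheme are assumptions made for clarity.

```python
import numpy as np

def vpm_distance(feats_a, vis_a, feats_b, vis_b):
    """Compare two images with focus on their shared regions.

    feats_*: (R, D) array of region-level features, one row per region.
    vis_*:   (R,) array of region visibility confidences in [0, 1].

    Regions invisible in either image receive near-zero weight, which
    suppresses the spatial misalignment between partial and holistic
    images. (Hypothetical sketch of the idea, not the paper's exact
    formulation.)
    """
    # Per-region Euclidean distance between the two feature sets.
    region_dist = np.linalg.norm(feats_a - feats_b, axis=1)
    # Joint visibility: a region only counts if visible in BOTH images.
    weights = vis_a * vis_b
    # Normalize by total weight so partial observations stay comparable.
    return float((weights * region_dist).sum() / (weights.sum() + 1e-12))
```

For example, if the second region of one image is occluded (visibility 0), any feature mismatch in that region is ignored, so a partial crop can still match its holistic counterpart on the regions they share.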

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Market-1501-C | VPM | Rank-1 | 31.17 | | Unverified |
