SOTAVerified

Devil's in the Details: Aligning Visual Clues for Conditional Embedding in Person Re-Identification

2020-09-11 · Code Available

Fufu Yu, Xinyang Jiang, Yifei Gong, Shizhen Zhao, Xiaowei Guo, Wei-Shi Zheng, Feng Zheng, Xing Sun


Abstract

Although Person Re-Identification has made impressive progress, difficult cases such as occlusion, changes of viewpoint, and similar clothing still pose great challenges. Besides overall visual features, matching and comparing detailed information is also essential for tackling these challenges. This paper proposes two key recognition patterns, which most existing methods fail to satisfy, to better utilize the detailed information in pedestrian images. First, Visual Clue Alignment requires the model to select and align decisive region pairs from two images for pair-wise comparison, whereas existing methods only align regions with predefined rules such as high feature similarity or identical semantic labels. Second, Conditional Feature Embedding requires the overall feature of a query image to be dynamically adjusted based on the gallery image it is matched against, whereas most existing methods ignore the reference images. By introducing novel techniques, including a correspondence attention module and a discrepancy-based GCN, we propose an end-to-end ReID method that integrates both patterns into a unified framework, called CACE-Net ((C)lue (A)lignment and (C)onditional (E)mbedding). Experiments show that CACE-Net achieves state-of-the-art performance on three public datasets.
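To make the two patterns concrete, here is a minimal NumPy sketch of the core idea behind conditional embedding via clue alignment: each query region attends over the gallery regions (a stand-in for the paper's correspondence attention module), and the attended gallery context modulates the query feature before pooling. This is not the authors' implementation; the function names, the dot-product affinity, and the additive modulation are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conditional_embedding(query_regions, gallery_regions):
    """Sketch of a gallery-conditioned query embedding.

    query_regions:   (Nq, d) region features of the query image
    gallery_regions: (Ng, d) region features of the gallery image
    Returns a single (d,) embedding that depends on the gallery image.
    """
    # affinity between every (query, gallery) region pair
    affinity = query_regions @ gallery_regions.T        # (Nq, Ng)
    # soft alignment: which gallery clues each query region matches
    attn = softmax(affinity, axis=1)
    # gallery context aligned to each query region
    context = attn @ gallery_regions                    # (Nq, d)
    # modulate the query features with the aligned context (assumption:
    # simple additive fusion), then pool to one embedding
    conditioned = query_regions + context
    return conditioned.mean(axis=0)                     # (d,)
```

In this sketch, matching a query against two different gallery images yields two different query embeddings, which is exactly the property the conditional-embedding pattern asks for; the unconditional baseline would simply pool `query_regions` on its own.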

Benchmark Results

Dataset       | Model                     | Metric | Claimed | Verified | Status
CUHK03-C      | CaceNet                   | Rank-1 | 17.04   | —        | Unverified
DukeMTMC-reID | CACENET (ResNet50 w/o RK) | mAP    | 81.29   | —        | Unverified
Market-1501   | CACENET (ResNet50 w/o RR) | Rank-1 | 95.96   | —        | Unverified
MSMT17        | CACENET (ResNet50 w/o RR) | mAP    | 62      | —        | Unverified

Reproductions