SOTAVerified

Interpretable and Generalizable Person Re-Identification with Query-Adaptive Convolution and Temporal Lifting

2019-04-23 · ECCV 2020 · Code Available

Shengcai Liao, Ling Shao


Abstract

For person re-identification, existing deep networks often focus on representation learning. However, without transfer learning, the learned model is fixed as is and cannot adapt to various unseen scenarios. In this paper, beyond representation learning, we consider how to formulate person image matching directly in deep feature maps. We treat image matching as finding local correspondences in feature maps, and construct query-adaptive convolution kernels on the fly to achieve local matching. In this way, the matching process and results are interpretable, and this explicit matching generalizes better than representation features to unseen scenarios, such as unknown misalignments and pose or viewpoint changes. To facilitate end-to-end training of this architecture, we further build a class memory module that caches feature maps of the most recent samples of each class, so as to compute image matching losses for metric learning. In direct cross-dataset evaluation, the proposed Query-Adaptive Convolution (QAConv) method gains large improvements over popular learning methods (about 10%+ mAP), and achieves results comparable to many transfer learning methods. In addition, we propose a model-free temporal co-occurrence based score weighting method called TLift, which further improves performance, achieving state-of-the-art results in cross-dataset person re-identification. Code is available at https://github.com/ShengcaiLiao/QAConv.
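The core idea of the abstract — treating each spatial location of the query feature map as a convolution kernel and matching it against the gallery feature map — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: function names (`qaconv_similarity`, `l2norm`), the 1x1-kernel simplification, and the symmetric max-then-mean aggregation are choices made here for clarity.

```python
import numpy as np

def l2norm(fm):
    # Normalize each spatial location's feature vector (along channels)
    # to unit length, so dot products become cosine similarities.
    return fm / (np.linalg.norm(fm, axis=0, keepdims=True) + 1e-12)

def qaconv_similarity(query_fm, gallery_fm):
    """Local-matching similarity in the spirit of QAConv (a sketch).

    Each spatial location of the query feature map acts as a 1x1
    convolution kernel applied over the gallery feature map. The best
    response over all gallery locations is kept as the local
    correspondence, and responses are averaged over query locations
    (and symmetrically over gallery locations).

    query_fm, gallery_fm: arrays of shape (C, H, W),
    L2-normalized per location.
    """
    C = query_fm.shape[0]
    q = query_fm.reshape(C, -1).T      # (HW_q, C): one kernel per location
    g = gallery_fm.reshape(C, -1)      # (C, HW_g)
    responses = q @ g                  # (HW_q, HW_g) cosine similarities
    # Best local correspondence for each query location, and vice versa.
    q_to_g = responses.max(axis=1).mean()
    g_to_q = responses.max(axis=0).mean()
    return 0.5 * (q_to_g + g_to_q)

# Matching an image against itself yields the maximum similarity of 1.0,
# since every location's best correspondence is itself.
rng = np.random.default_rng(0)
fm_a = l2norm(rng.standard_normal((8, 4, 3)))
fm_b = l2norm(rng.standard_normal((8, 4, 3)))
self_score = qaconv_similarity(fm_a, fm_a)   # 1.0
cross_score = qaconv_similarity(fm_a, fm_b)  # strictly below 1.0
```

Because the kernels are built from the query at match time, the resulting correspondence map is interpretable: each entry of `responses` says which gallery location best explains a given query location, which is what makes the matching robust to misalignment.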

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CUHK03 to Market | QAConv | mAP | 66.5 | — | Unverified |
| Market to CUHK03 | QAConv | mAP | 32.9 | — | Unverified |

Reproductions