Style Normalization and Restitution for Generalizable Person Re-identification
Xin Jin, Cuiling Lan, Wen-Jun Zeng, Zhibo Chen, Li Zhang
- Code: github.com/microsoft/SNR (official PyTorch implementation)
Abstract
Existing fully-supervised person re-identification (ReID) methods usually suffer from poor generalization caused by domain gaps. The key to solving this problem lies in filtering out identity-irrelevant interference and learning domain-invariant person representations. In this paper, we aim to design a generalizable person ReID framework that is trained on source domains yet generalizes well to unseen target domains. To achieve this goal, we propose a simple yet effective Style Normalization and Restitution (SNR) module. Specifically, we filter out style variations (e.g., illumination, color contrast) with Instance Normalization (IN). However, such a process inevitably removes discriminative information. We therefore propose to distill the identity-relevant features from the removed information and restitute them to the network to preserve high discrimination. For better disentanglement, we enforce a dual causal loss constraint in SNR to encourage the separation of identity-relevant and identity-irrelevant features. Extensive experiments demonstrate the strong generalization capability of our framework. Our models empowered by the SNR modules significantly outperform state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks, and also show superiority in unsupervised domain adaptation.
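The core SNR computation described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: `gate_weights` stands in for the learned SE-style channel-attention gate that the actual model trains end-to-end, and the dual causal loss is omitted. It shows the two steps the abstract names: style normalization via IN, then restitution of the gated (identity-relevant) part of the removed residual.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance Normalization: normalize each sample's channels over spatial dims.
    x has shape (N, C, H, W)."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def snr_block(x, gate_weights, eps=1e-5):
    """Style Normalization and Restitution (sketch).

    gate_weights: (C,) vector in [0, 1] splitting the residual per channel;
    in the real model this gate is learned, here it is supplied by hand.
    Returns the restituted feature (IN output plus the identity-relevant
    residual) and its counterpart contaminated by the identity-irrelevant
    residual, which the dual causal loss would push apart during training.
    """
    x_norm = instance_norm(x, eps)            # style-normalized features
    residual = x - x_norm                     # information removed by IN
    a = gate_weights.reshape(1, -1, 1, 1)
    r_plus = a * residual                     # identity-relevant part
    r_minus = (1.0 - a) * residual            # identity-irrelevant part
    return x_norm + r_plus, x_norm + r_minus
```

Note that the two residual parts sum back to the full residual, so nothing is lost by the split; only the gating decides what is restituted to the main branch.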
Benchmark Results
| Source → Target | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CUHK03 → Market | SNR | mAP | 52.4 | — | Unverified |
| CUHK03 → MSMT | SNR | mAP | 7.7 | — | Unverified |
| Duke → Market | SNR | mAP | 61.7 | — | Unverified |
| Market → CUHK03 | SNR | mAP | 17.5 | — | Unverified |
| Market → Duke | SNR | mAP | 58.1 | — | Unverified |