Distilling the Past: Information-Dense and Style-Aware Replay for Lifelong Person Re-Identification
Mingyu Wang, Wei Jiang, Haojie Liu, Zhiyong Li, Q. M. Jonathan Wu
Abstract
Lifelong person re-identification (LReID) aims to continuously adapt to new domains while mitigating catastrophic forgetting. Replay-based methods effectively alleviate forgetting, but strict memory budgets limit the diversity of the samples they can store. Conversely, exemplar-free approaches bypass memory constraints entirely but struggle to preserve the fine-grained identity semantics crucial for Re-ID. To resolve this dilemma, we propose an Information-Dense and Style-Aware Replay framework. Instead of storing a sparse set of raw historical images, we distill the knowledge of the sequential data stream into the pixel space of a compact replay buffer via multi-stage gradient matching and identity supervision. This condensation not only maximizes the semantic representativeness of the limited memory but also naturally conceals original visual details, inherently preserving data privacy. Furthermore, to combat forgetting induced by cross-domain shifts, we introduce a dual-alignment style replay strategy that adapts both current and condensed replay samples, harmonizing feature representations across disparate domains. Extensive experiments on multiple LReID benchmarks demonstrate that our method significantly outperforms existing approaches, improving Seen-Avg mAP by +5.0% over the current state of the art and by +6.0% over traditional replay-based methods, thereby establishing an efficient and robust new baseline for lifelong learning.
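The core condensation idea in the abstract can be illustrated with a minimal sketch. This is not the paper's multi-stage procedure: it uses a hypothetical one-layer linear model with squared loss, where the synthetic replay "pixels" are optimized so that the parameter gradient they induce matches the gradient produced by the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, n_real, n_syn = 8, 3, 64, 4            # feature dim, classes, batch sizes

X_real = rng.normal(size=(n_real, D))        # stand-in for real images
Y_real = rng.normal(size=(n_real, C))
X_syn = rng.normal(size=(n_syn, D))          # learnable synthetic replay samples
Y_syn = rng.normal(size=(n_syn, C))          # fixed identity targets
W = rng.normal(size=(C, D)) * 0.1            # linear model f(x) = W x

def param_grad(W, X, Y):
    """Mean gradient of 0.5 * ||W x - y||^2 w.r.t. W over the batch."""
    R = X @ W.T - Y                          # residuals, shape (n, C)
    return R.T @ X / len(X)                  # shape (C, D)

def match_loss(W, X_syn, Y_syn, G_real):
    """Squared Frobenius distance between synthetic and real gradients."""
    return 0.5 * np.sum((param_grad(W, X_syn, Y_syn) - G_real) ** 2)

G_real = param_grad(W, X_real, Y_real)
losses, lr = [], 0.5
for _ in range(200):
    G_syn = param_grad(W, X_syn, Y_syn)
    Dm = G_syn - G_real                      # gradient mismatch, (C, D)
    R = X_syn @ W.T - Y_syn                  # (n_syn, C)
    # analytic gradient of the matching loss w.r.t. the synthetic samples
    grad_X = (X_syn @ Dm.T @ W + R @ Dm) / n_syn
    X_syn -= lr * grad_X                     # update pixels, not the model
    losses.append(match_loss(W, X_syn, Y_syn, G_real))

print(f"match loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

After optimization, the small synthetic set induces nearly the same parameter gradient as the much larger real set, which is the sense in which a condensed buffer can replay "information-dense" past knowledge; the paper's actual framework matches gradients at multiple training stages and adds identity supervision on top of this principle.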