3D Shape Temporal Aggregation for Video-Based Clothing-Change Person Re-Identification

2023-03-09 · Asian Conference on Computer Vision 2023 · Code Available

Ke Han, Shaogang Gong, Yan Huang, Liang Wang, Tieniu Tan

Abstract

The 3D shape of the human body can serve as discriminative and clothing-independent information in video-based clothing-change person re-identification (Re-ID). However, existing Re-ID methods usually generate 3D body shapes without considering identity modeling, which severely weakens the discriminability of 3D human shapes. In addition, different video frames provide highly similar 3D shapes, but existing methods cannot capture the differences among 3D shapes over time. They are thus insensitive to the unique and discriminative 3D shape information of each frame, and ineffectively aggregate many redundant frame-wise shapes into a video-wise representation for Re-ID. To address these problems, we propose a 3D Shape Temporal Aggregation (3STA) model for video-based clothing-change Re-ID. To generate a discriminative 3D shape for each frame, we first introduce an identity-aware 3D shape generation module, which embeds identity information into the generation of 3D shapes through the joint learning of shape estimation and identity recognition. Second, a difference-aware shape aggregation module is designed to measure inter-frame 3D human shape differences and automatically select the unique 3D shape information of each frame. This helps minimize redundancy and maximize complementarity in temporal shape aggregation. We further construct a Video-based Clothing-Change Re-ID (VCCR) dataset to address the lack of publicly available datasets for video-based clothing-change Re-ID. Extensive experiments on the VCCR dataset demonstrate the effectiveness of the proposed 3STA model. The dataset is available at https://vhank.github.io/vccr.github.io.
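The difference-aware aggregation idea — down-weighting frames whose 3D shape features are near-duplicates of the rest of the clip and favoring frames that carry unique shape information — can be illustrated with a minimal sketch. This is not the paper's exact module; the uniqueness score (mean pairwise feature distance) and softmax weighting are illustrative assumptions, and `frame_feats` stands in for whatever per-frame shape embedding the generation module produces.

```python
import numpy as np

def aggregate_shapes(frame_feats, tau=1.0):
    """Difference-aware temporal aggregation (illustrative sketch).

    Frames whose shape features differ most from the other frames get
    higher weights, so redundant near-duplicate frames contribute less
    to the video-level representation.

    frame_feats: (T, D) array of per-frame 3D shape features.
    tau: softmax temperature (assumed hyperparameter).
    Returns a (D,) video-level shape representation.
    """
    T = frame_feats.shape[0]
    # Pairwise L2 distances between frame features: (T, T).
    diff = frame_feats[:, None, :] - frame_feats[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Uniqueness score per frame: mean distance to the other frames.
    uniq = dist.sum(axis=1) / max(T - 1, 1)
    # Softmax over uniqueness scores -> aggregation weights.
    w = np.exp(uniq / tau)
    w /= w.sum()
    # Weighted sum of frame features -> video-wise shape representation.
    return w @ frame_feats
```

With this weighting, a clip of mostly identical frames plus one distinctive frame yields a representation pulled toward the distinctive frame rather than the redundant majority, which is the redundancy-minimizing behavior the module is designed for.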
