
Learning Scene-Pedestrian Graph for End-to-End Person Search

2023-08-09 · IEEE Transactions on Industrial Informatics 2023 · Code Available

Song, Zifan and Zhao, Cairong and Hu, Guosheng and Miao, Duoqian


Abstract

Person search aims to find specific persons in visual scenes and comprises two subtasks: pedestrian detection and person re-identification. The dominant approach in this area uses end-to-end networks that focus on analyzing the foreground (i.e., pedestrians) while ignoring the background (i.e., scene) information. However, the scene often offers useful clues for person search. For example, pedestrians normally appear on a road rather than at the top of a tree, and pedestrians appearing at the same location are likely to have similar occlusions. Modeling the interplay between pedestrians and scenes can therefore improve performance. In this article, a novel scene-pedestrian graph (SPG) is proposed, which explicitly models this interplay. To improve the quality of pedestrian bounding boxes, we pioneer a strategy that uses high-quality pedestrian bounding boxes to guide low-quality ones in the same scene. In addition, we design a contextual and temporal graph matching algorithm that effectively exploits the contextual and temporal information in the constructed SPG to improve pedestrian matching. Benefiting from its robustness in complex scenes, our model achieves promising performance compared with state-of-the-art methods on two popular person search benchmarks, CUHK-SYSU and PRW.
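The abstract's idea of linking pedestrian detections to the scenes they appear in, and letting a high-quality bounding box guide low-quality ones from the same scene, can be illustrated with a minimal sketch. This is not the paper's implementation; the class, method names, and the scalar "quality" score are all illustrative assumptions.

```python
from collections import defaultdict


class ScenePedestrianGraph:
    """Illustrative sketch (not the paper's actual model) of a
    scene-pedestrian graph: each scene node links to the pedestrian
    detections observed in it, so scene-level context can be shared
    across detections."""

    def __init__(self):
        self.scene_to_peds = defaultdict(list)  # scene id -> pedestrian ids
        self.ped_quality = {}                   # pedestrian id -> box quality in [0, 1]

    def add_detection(self, scene_id, ped_id, quality):
        """Register a pedestrian detection under its scene node."""
        self.scene_to_peds[scene_id].append(ped_id)
        self.ped_quality[ped_id] = quality

    def guide_low_quality(self, scene_id, threshold=0.5):
        """Pair each low-quality box with the best box in the same scene,
        mimicking the high-quality-guides-low-quality strategy.
        Returns (guide, target) pairs."""
        peds = self.scene_to_peds[scene_id]
        if not peds:
            return []
        best = max(peds, key=lambda p: self.ped_quality[p])
        return [(best, p) for p in peds
                if p != best and self.ped_quality[p] < threshold]


# Toy usage: one scene with one sharp box and two weaker ones.
graph = ScenePedestrianGraph()
graph.add_detection("scene_1", "ped_a", 0.9)
graph.add_detection("scene_1", "ped_b", 0.3)
graph.add_detection("scene_1", "ped_c", 0.7)
print(graph.guide_low_quality("scene_1"))  # [('ped_a', 'ped_b')]
```

In the full method, the guidance would refine box coordinates and features rather than merely pair detections, and the graph additionally carries the contextual and temporal edges used by the matching algorithm.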
