
Improving Local Features with Relevant Spatial Information by Vision Transformer for Crowd Counting

2022-09-30 · British Machine Vision Conference (BMVC) 2022 · Code Available

Nguyen H. Tran, Ta Duc Huy, Soan T. M. Duong, Phan Nguyen, Dao Huu Hung, Chanh D. Tr. Nguyen, Trung Bui, Steven Q.H. Truong


Abstract

Vision Transformer (ViT) variants have demonstrated state-of-the-art performance on many computer vision benchmarks, including crowd counting. Although Transformer-based models have achieved breakthroughs in crowd counting, existing methods have some limitations. Global embeddings extracted from ViTs do not encapsulate fine-grained local features and are thus prone to errors in crowded scenes with diverse human scales and densities. In this paper, we propose LoViTCrowd with the argument that local features with spatial information from relevant regions, obtained via the attention mechanism of ViT, can effectively reduce the crowd counting error. To this end, we divide each image into a cell grid. Considering patches of 3 × 3 cells, in which the main parts of the human body are encapsulated, the surrounding cells provide meaningful cues for crowd estimation. ViT is applied to each patch, employing the attention mechanism across the 3 × 3 cells to count the number of people in the central cell. The number of people in the image is obtained by summing the counts over its non-overlapping cells. Extensive experiments on four public datasets spanning sparse and dense scenes, i.e., Mall, ShanghaiTech Part A, ShanghaiTech Part B, and UCF-QNRF, demonstrate our method's state-of-the-art performance. Compared to TransCrowd, LoViTCrowd reduces the root mean square error (RMSE) and the mean absolute error (MAE) by an average of 14.2% and 9.7%, respectively. The source code is available at https://github.com/nguyen1312/LoViTCrowd
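The patch-wise counting scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_center_count` stands in for the ViT-based regressor (a hypothetical callable that maps a 3 × 3-cell patch to the head count of its central cell), and the border padding is an assumption about how edge cells obtain full 3 × 3 context.

```python
import numpy as np

def count_crowd(image, cell_size, predict_center_count):
    """Sketch of grid-based counting: split the image into a cell grid,
    form a 3x3-cell patch around each cell, predict the count of the
    central cell, and sum the per-cell counts.

    `predict_center_count` is a hypothetical model stub (the paper uses a
    ViT attending across the 3x3 cells); padding at the border is assumed.
    """
    H, W = image.shape[:2]
    rows, cols = H // cell_size, W // cell_size
    # Pad one cell on each side so border cells also get 3x3-cell context.
    pad = cell_size
    pad_widths = ((pad, pad), (pad, pad)) + ((0, 0),) * (image.ndim - 2)
    padded = np.pad(image, pad_widths)
    total = 0.0
    for r in range(rows):
        for c in range(cols):
            # 3x3-cell patch centred on cell (r, c), in padded coordinates.
            y0, x0 = r * cell_size, c * cell_size
            patch = padded[y0:y0 + 3 * cell_size, x0:x0 + 3 * cell_size]
            total += predict_center_count(patch)  # count of central cell only
    return total
```

Because each cell is counted exactly once (the 3 × 3 patches overlap, but only the central cell's count is taken from each), the per-cell counts sum to the image-level total without double counting.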
