CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes
Yuhong Li, Xiaofan Zhang, Deming Chen
Code
- github.com/xr0927/chapter5-learning_CSRNet (PyTorch)
- github.com/RTalha/CROWD-COUNTING-USING-CSRNET (TensorFlow)
- github.com/Bazingaliu/learning_CSRNet (PyTorch)
- github.com/krutikabapat/Crowd_Counting
- github.com/dattatrayshinde/oc_sd (PyTorch)
- github.com/CommissarMa/CSRNet-pytorch (PyTorch)
- github.com/leeyeehoo/CSRNet-pytorch (PyTorch)
- github.com/Neerajj9/CSRNet-keras (TensorFlow/Keras)
- github.com/Saritus/Crowd-Counter (TensorFlow)
- github.com/DiaoXY/CSRnet (TensorFlow)
Abstract
We propose a network for Congested Scene Recognition, called CSRNet, a data-driven deep learning method that can understand highly congested scenes, perform accurate count estimation, and produce high-quality density maps. CSRNet is composed of two major components: a convolutional neural network (CNN) front-end for 2D feature extraction and a dilated CNN back-end, which uses dilated kernels to deliver larger receptive fields and to replace pooling operations. Because of its pure convolutional structure, CSRNet is easy to train. We evaluate CSRNet on four datasets (the ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldExpo'10 dataset, and the UCSD dataset) and achieve state-of-the-art performance. On the ShanghaiTech Part_B dataset, CSRNet achieves 47.3% lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We also extend CSRNet to counting other objects, such as vehicles in the TRANCOS dataset; results show that CSRNet significantly improves output quality, with 15.4% lower MAE than the previous state-of-the-art approach.
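The back-end idea in the abstract is that a dilated kernel samples the input at strided offsets, so a k×k kernel with dilation d covers an effective field of k + (k − 1)(d − 1) pixels per side without adding parameters or downsampling. As a minimal illustration (not the authors' implementation, which uses a VGG-16 front-end in PyTorch), here is a naive NumPy sketch of a dilated 2D convolution:

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """Naive 'valid' 2D cross-correlation with a dilated kernel.

    A k x k kernel with dilation d covers an effective receptive field of
    k + (k - 1) * (d - 1) pixels per side, at the same parameter count.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (dilation - 1)  # effective kernel extent
    h, w = img.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input at dilated (strided) offsets
            patch = img[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(49, dtype=float).reshape(7, 7)   # toy 7x7 "feature map"
kern = np.ones((3, 3))

plain = dilated_conv2d(img, kern, dilation=1)    # 3x3 field -> 5x5 output
dilated = dilated_conv2d(img, kern, dilation=2)  # 5x5 field -> 3x3 output
```

A dilation-2 3×3 convolution is equivalent to an ordinary convolution with a 5×5 kernel whose intermediate entries are zero, which is why the paper can enlarge receptive fields in the back-end without pooling away spatial resolution.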