SOTAVerified

LIP: Local Importance-based Pooling

2019-08-12 · ICCV 2019 · Code Available

Ziteng Gao, Limin Wang, Gangshan Wu


Abstract

Spatial downsampling layers are favored in convolutional neural networks (CNNs) to downscale feature maps for larger receptive fields and less memory consumption. However, for discriminative tasks, there is a possibility that these layers lose the discriminative details due to improper pooling strategies, which could hinder the learning process and eventually result in suboptimal models. In this paper, we present a unified framework over the existing downsampling layers (e.g., average pooling, max pooling, and strided convolution) from a local importance view. In this framework, we analyze the issues of these widely-used pooling layers and figure out the criteria for designing an effective downsampling layer. According to this analysis, we propose a conceptually simple, general, and effective pooling layer based on local importance modeling, termed as Local Importance-based Pooling (LIP). LIP can automatically enhance discriminative features during the downsampling procedure by learning adaptive importance weights based on inputs. Experiment results show that LIP consistently yields notable gains with different depths and different architectures on ImageNet classification. In the challenging MS COCO dataset, detectors with our LIP-ResNets as backbones obtain a consistent improvement (≥ 1.4%) over the vanilla ResNets, and especially achieve the current state-of-the-art performance in detecting small objects under the single-scale testing scheme.
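The abstract's core idea, downsampling with learned, input-adaptive importance weights, can be sketched as a softmax-style weighted average over each pooling window. Below is a minimal single-channel NumPy illustration assuming the pooled output is O = Σ exp(G(I))·I / Σ exp(G(I)) within each window; the function name `lip_pool` and its toy interface are ours, and in the paper the logit map G(I) is produced by a small learned sub-network rather than passed in directly.

```python
import numpy as np

def lip_pool(x, logits, k=2, s=2):
    """Sketch of Local Importance-based Pooling on one channel.

    x      : (H, W) feature map I
    logits : (H, W) logit map G(I); here supplied directly as a stand-in
             for the paper's learned logit module
    k, s   : pooling window size and stride
    """
    w = np.exp(logits)  # importance map F(I) = exp(G(I))
    H, W = x.shape
    Ho, Wo = (H - k) // s + 1, (W - k) // s + 1
    out = np.zeros((Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            win_x = x[i*s:i*s+k, j*s:j*s+k]
            win_w = w[i*s:i*s+k, j*s:j*s+k]
            # weighted average over the window, normalized by total importance
            out[i, j] = (win_x * win_w).sum() / win_w.sum()
    return out
```

With all-zero logits every weight is 1 and this reduces to average pooling; as one element's logit grows large the output approaches max pooling over that window, which is how this framework unifies the standard downsampling layers.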

Benchmark Results

Dataset  | Model                          | Metric         | Claimed | Verified | Status
ImageNet | LIP-ResNet-101                 | Top 1 Accuracy | 79.33   | —        | Unverified
ImageNet | ResNet-50 (LIP Bottleneck-256) | Top 1 Accuracy | 78.15   | —        | Unverified
ImageNet | LIP-DenseNet-BC-121            | Top 1 Accuracy | 76.64   | —        | Unverified

Reproductions