SOTAVerified

XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers For Convolutional Neural Networks

2021-11-21

Jian Sun, Ali Pourramezan Fard, Mohammad H. Mahoor

Abstract

Capsule Networks are powerful at defining the positional relationships between features in deep neural networks for visual recognition tasks, but they are computationally expensive and unsuitable for running on mobile devices. The bottleneck is the computational complexity of the Dynamic Routing mechanism used between the capsules. On the other hand, XNOR-Net is fast and computationally efficient, though it suffers from low accuracy due to information loss in the binarization process. To address the computational burden of the Dynamic Routing mechanism, this paper proposes new Fully Connected (FC) layers that xnorize the linear projection either outside or inside the Dynamic Routing within the CapsFC layer. Specifically, our proposed FC layers have two versions: XnODR (Xnorize the linear projection Outside Dynamic Routing) and XnIDR (Xnorize the linear projection Inside Dynamic Routing). To test the generalization of both XnODR and XnIDR, we insert them into two different networks, MobileNetV2 and ResNet-50. Our experiments on three datasets (MNIST, CIFAR-10, and MultiMNIST) validate their effectiveness. The results demonstrate that both XnODR and XnIDR help networks achieve high accuracy with lower FLOPs and fewer parameters (e.g., 96.14% accuracy with 2.99M parameters and 311.74M FLOPs on CIFAR-10).
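The two ingredients the abstract combines can be sketched briefly: an XNOR-Net-style binary linear projection (inputs and weights binarized to {-1, +1}, with scaling factors recovering an approximation of the real-valued product) and the standard Dynamic Routing loop between capsules. The NumPy sketch below is illustrative only, not the authors' implementation; function names, shapes, and the placement of the xnorized projection are assumptions for exposition.

```python
import numpy as np

def xnor_linear(x, W):
    """XNOR-Net-style binary projection (illustrative sketch).

    Inputs and weights are binarized to {-1, +1}; the scaling
    factors alpha and beta (mean absolute values) approximately
    recover the magnitude of the full-precision product.
    """
    alpha = np.mean(np.abs(W))           # weight scaling factor
    beta = np.mean(np.abs(x))            # input scaling factor
    xb = np.sign(x); xb[xb == 0] = 1     # binarized input
    Wb = np.sign(W); Wb[Wb == 0] = 1     # binarized weights
    return alpha * beta * (xb @ Wb)

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing non-linearity: shrinks norm into [0, 1)."""
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Plain Dynamic Routing over prediction vectors u_hat
    of shape (num_in, num_out, dim_out)."""
    b = np.zeros(u_hat.shape[:2])                             # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = np.einsum('ij,ijk->jk', c, u_hat)                 # weighted sum
        v = squash(s)                                         # output capsules
        b = b + np.einsum('ijk,jk->ij', u_hat, v)             # agreement update
    return v
```

In these terms, XnODR would replace the full-precision projection that produces the prediction vectors (before the routing loop) with `xnor_linear`, while XnIDR would xnorize a projection applied inside the routing iterations; the exact placement in the CapsFC layer follows the paper, and the sketch above only shows the two building blocks.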

Benchmark Results

Dataset    Model            Metric              Claimed  Verified  Status
CIFAR-10   ResNet_XnIDR     Percentage correct  96.87              Unverified
MNIST      MobileNet_XnODR  Accuracy            99.68              Unverified