
MGIC: Multigrid-in-Channels Neural Network Architectures

2020-11-17 · NeurIPS Workshop DLDE 2021 · Code Available

Moshe Eliasof, Jonathan Ephrath, Lars Ruthotto, Eran Treister


Abstract

We present a multigrid-in-channels (MGIC) approach that tackles the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). In doing so, our approach addresses the redundancy in CNNs that is also exposed by the recent success of lightweight CNNs. Lightweight CNNs can achieve comparable accuracy to standard CNNs with fewer parameters; however, the number of weights still scales quadratically with the CNN's width. To address this, our MGIC architectures replace each CNN block with an MGIC counterpart that uses a hierarchy of nested grouped convolutions of small group size. Hence, our proposed architectures scale linearly with respect to the network's width while retaining full coupling of the channels, as in standard CNNs. Our extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to different architectures such as ResNet and MobileNetV3 reduces the number of parameters while obtaining similar or better accuracy.
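To make the scaling argument concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation. It illustrates the two ingredients the abstract describes: grouped 3x3 convolutions with a small fixed group size (parameter count linear in the channel width) and a hierarchy of channel "grids" connected by 1x1 restriction/prolongation convolutions that restore coupling across all channels. The class name `MGICBlockSketch`, the `group_size`/`levels` parameters, and the halving-based coarsening scheme are illustrative assumptions.

```python
# Hedged sketch (not the paper's code): grouped convolutions of fixed group size
# on a hierarchy of channel grids, with 1x1 grouped convs as channel
# restriction/prolongation. All names and the coarsening scheme are assumptions.
import torch
import torch.nn as nn


def grouped_conv(channels: int, group_size: int) -> nn.Conv2d:
    """3x3 convolution split into groups of `group_size` channels each."""
    assert channels % group_size == 0
    return nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                     groups=channels // group_size)


class MGICBlockSketch(nn.Module):
    """Illustrative multigrid-in-channels block (V-cycle over channel grids)."""

    def __init__(self, channels: int, group_size: int = 8, levels: int = 3):
        super().__init__()
        self.smooth = nn.ModuleList()    # grouped 3x3 conv on each channel grid
        self.restrict = nn.ModuleList()  # coarsen: halve the number of channels
        self.prolong = nn.ModuleList()   # refine: double the number of channels
        c = channels
        for _ in range(levels):
            g = max(c // (2 * group_size), 1)  # groups for the 1x1 transfer convs
            self.smooth.append(grouped_conv(c, min(group_size, c)))
            self.restrict.append(nn.Conv2d(c, c // 2, kernel_size=1, groups=g))
            self.prolong.append(nn.Conv2d(c // 2, c, kernel_size=1, groups=g))
            c //= 2
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        # descend: smooth on the current channel grid, then restrict to fewer channels
        for smooth, restrict in zip(self.smooth, self.restrict):
            x = self.act(smooth(x))
            skips.append(x)
            x = restrict(x)
        # ascend: prolong back to finer channel grids, adding the stored skips
        for prolong, skip in zip(reversed(self.prolong), reversed(skips)):
            x = skip + self.act(prolong(x))
        return x


# Parameter scaling: a standard 3x3 conv grows quadratically with the width,
# while the grouped hierarchical block grows roughly linearly.
for width in (64, 128, 256):
    dense = nn.Conv2d(width, width, kernel_size=3, padding=1)
    mgic = MGICBlockSketch(width, group_size=8)
    print(width,
          sum(p.numel() for p in dense.parameters()),
          sum(p.numel() for p in mgic.parameters()))
```

Running the loop shows the dense convolution's parameter count quadrupling each time the width doubles, whereas the grouped hierarchy's count only roughly doubles, which is the linear-in-width behavior the abstract claims for MGIC.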
