
Learning Implicitly Recurrent CNNs Through Parameter Sharing

2019-02-26 · ICLR 2019 · Code Available

Pedro Savarese, Michael Maire


Abstract

We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy. Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though considering only the design aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures. Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set.
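The abstract describes layers whose weights are learned linear combinations of templates drawn from a global bank. The sketch below is a minimal illustration of that idea, not the authors' released code: the names `TemplateBank`, `SharedConv2d`, and `num_templates` are hypothetical, and details such as initialization, normalization, and how coefficients are constrained are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateBank(nn.Module):
    """Global bank of convolution-kernel templates shared across layers."""
    def __init__(self, num_templates, out_channels, in_channels, kernel_size):
        super().__init__()
        self.templates = nn.Parameter(
            0.01 * torch.randn(num_templates, out_channels, in_channels,
                               kernel_size, kernel_size)
        )

class SharedConv2d(nn.Module):
    """Conv layer whose kernel is a learned linear combination of bank templates."""
    def __init__(self, bank, padding=1):
        super().__init__()
        self.bank = bank
        # Per-layer soft mixing coefficients over the shared templates.
        self.coefficients = nn.Parameter(torch.randn(bank.templates.size(0)))
        self.padding = padding

    def forward(self, x):
        # weight = sum_k coefficient_k * template_k
        weight = torch.einsum('k,koihw->oihw', self.coefficients, self.bank.templates)
        return F.conv2d(x, weight, padding=self.padding)

# Several layers share one bank; using fewer templates than layers forces
# parameter sharing, and layers with near-identical coefficients behave
# like an unrolled recurrent block.
bank = TemplateBank(num_templates=4, out_channels=64, in_channels=64, kernel_size=3)
layers = nn.ModuleList([SharedConv2d(bank) for _ in range(10)])

x = torch.randn(1, 64, 32, 32)
for layer in layers:
    x = F.relu(layer(x))
```

If trained coefficients of different layers converge to the same mixture, those layers apply the same weights and can be folded into an explicit loop, which is the "near strict recurrent structure" the abstract refers to.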

Tasks

Benchmark Results

Dataset   | Model      | Metric             | Claimed | Verified | Status
CIFAR-10  | Shared WRN | Percentage correct | 97.47   |          | Unverified
CIFAR-100 | Shared WRN | Percentage correct | 82.57   |          | Unverified

Reproductions