SOTAVerified

Neural networks with late-phase weights

2020-07-25 · ICLR 2021 · Code Available

Johannes von Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento


Abstract

The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD). Here, we show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning. At the end of learning, we obtain back a single model by taking a spatial average in weight space. To avoid incurring increased computational costs, we investigate a family of low-dimensional late-phase weight models which interact multiplicatively with the remaining parameters. Our results show that augmenting standard models with late-phase weights improves generalization in established benchmarks such as CIFAR-10/100, ImageNet and enwik8. These findings are complemented with a theoretical analysis of a noisy quadratic problem which provides a simplified picture of the late phases of neural network learning.
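The core idea in the abstract can be illustrated with a small sketch: maintain K low-dimensional "late-phase" parameters that interact multiplicatively with the shared base weights, then collapse the ensemble to a single model by averaging in weight space. This is a minimal NumPy sketch, not the authors' implementation; the rank-1 gain pattern and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base ("shared") weights, learned as usual with SGD.
W = rng.normal(size=(4, 3))

# K low-dimensional late-phase weights that interact multiplicatively
# with the base weights (hypothetical per-row gain, for illustration).
K = 5
gains = [1.0 + 0.1 * rng.normal(size=(4, 1)) for _ in range(K)]

def effective_weights(W, g):
    """Weights of ensemble member k: elementwise product of base and gain."""
    return W * g

# During the late phase of training, each SGD step would update W with
# the gradient averaged over members, and each g_k with its own gradient
# (training loop omitted in this sketch).

# At the end of learning: obtain back a single model by taking a
# spatial average in weight space over the ensemble members.
W_final = np.mean([effective_weights(W, g) for g in gains], axis=0)

# Because the interaction is linear in g, averaging the effective
# weights is equivalent to using the averaged gain directly.
g_mean = np.mean(gains, axis=0)
assert np.allclose(W_final, W * g_mean)
```

The multiplicative interaction keeps the late-phase parameters low-dimensional (here one gain per output row rather than a full weight copy), which is what avoids the increased computational cost of a conventional ensemble.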

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10 | WRN 28-14 | Percentage correct | 97.45 | | Unverified |
| CIFAR-10 | WRN 28-10 | Percentage correct | 96.81 | | Unverified |
| CIFAR-100 | WRN 28-14 | Percentage correct | 85 | | Unverified |
| CIFAR-100 | WRN 28-10 | Percentage correct | 83.06 | | Unverified |

Reproductions