
Weight-Sharing Regularization

2023-11-06

Mehran Shakerinava, Motahareh Sohrabi, Siamak Ravanbakhsh, Simon Lacoste-Julien

Abstract

Weight-sharing is ubiquitous in deep learning. Motivated by this, we propose a "weight-sharing regularization" penalty on the weights w ∈ ℝ^d of a neural network, defined as R(w) = (1/(d−1)) ∑_{i>j}^{d} |w_i − w_j|. We study the proximal mapping of R and provide an intuitive interpretation of it in terms of a physical system of interacting particles. We also parallelize existing algorithms for prox_R (to run on GPU) and find that one of them is fast in practice but slow (O(d)) for worst-case inputs. Using the physical interpretation, we design a novel parallel algorithm which runs in O(log^3 d) when sufficient processors are available, thus guaranteeing fast training. Our experiments reveal that weight-sharing regularization enables fully connected networks to learn convolution-like filters even when pixels have been shuffled, whereas convolutional neural networks fail in this setting. Our code is available on GitHub.
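
For concreteness, here is a minimal sketch of how the penalty itself can be evaluated. This is our own illustration, not the authors' released code, and the function name weight_sharing_penalty is hypothetical. A naive double loop over pairs costs O(d^2); sorting the weights gives the same sum in O(d log d) via a standard identity for pairwise absolute differences.

```python
import numpy as np

def weight_sharing_penalty(w: np.ndarray) -> float:
    """Evaluate R(w) = 1/(d-1) * sum_{i>j} |w_i - w_j|.

    Uses the sorting identity
        sum_{i>j} |w_i - w_j| = sum_k (2k - d + 1) * w_(k),
    where w_(0) <= ... <= w_(d-1) are the sorted entries,
    so the cost is O(d log d) instead of O(d^2).
    """
    d = w.size
    if d < 2:
        return 0.0
    x = np.sort(w)
    coeffs = 2.0 * np.arange(d) - d + 1  # how often each entry appears with sign +/-
    return float(coeffs @ x) / (d - 1)

# Identical weights incur zero penalty; spread-out weights are penalized.
print(weight_sharing_penalty(np.array([0.5, 0.5, 0.5])))  # 0.0
print(weight_sharing_penalty(np.array([0.0, 1.0, 3.0])))  # (1 + 3 + 2) / 2 = 3.0
```

In training, such a penalty would typically be added to the loss with a coefficient λ. Note that the sketch only evaluates R; the paper's contribution is an efficient parallel algorithm for the proximal mapping prox_R, which is not implemented here.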
