SOTAVerified

Volumization as a Natural Generalization of Weight Decay

2020-03-25

Liu Ziyin, ZiHao Wang, Makoto Yamada, Masahito Ueda

Abstract

We propose a novel regularization method for neural networks, called volumization. Inspired by physics, we define a physical volume for the weight parameters of a neural network, and we show that this is an effective way of regularizing the network. Intuitively, the method interpolates between L_2 and L_∞ regularization, so that weight decay and weight clipping become special cases of the proposed algorithm. We prove, on a toy example, that the essence of this method is a regularization technique for controlling the bias-variance tradeoff. The method performs well in the settings where standard weight decay is known to work well, including improving the generalization of networks and preventing memorization. Moreover, we show that volumization might lead to a simple method for training a neural network whose weights are binary or ternary.
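To make the interpolation idea concrete, here is a minimal sketch of an update rule that recovers weight decay at one end and weight clipping at the other. The exact volumization update is defined in the paper; the function names, the interpolation parameter `alpha`, and the linear-mixing form below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def weight_decay_step(w, lam):
    # L_2-style regularization: shrink every weight toward zero.
    return (1.0 - lam) * w

def weight_clip_step(w, V):
    # L_inf-style regularization: project weights onto the cube [-V, V].
    return np.clip(w, -V, V)

def volumization_like_step(w, lam, V, alpha):
    # Hypothetical interpolation between the two updates above.
    # alpha = 1 recovers pure weight decay; alpha = 0 recovers pure clipping.
    # The paper's actual volumization rule may differ; this is only a sketch.
    return alpha * weight_decay_step(w, lam) + (1.0 - alpha) * weight_clip_step(w, V)

w = np.array([2.0, -0.5, 3.0])
print(volumization_like_step(w, lam=0.01, V=1.0, alpha=0.0))  # pure clipping: [ 1.  -0.5  1. ]
print(volumization_like_step(w, lam=0.01, V=1.0, alpha=1.0))  # pure decay: [ 1.98 -0.495 2.97]
```

At the clipping end, weights are confined to a fixed "volume" (the cube of side 2V), which also suggests why the method can push weights toward a small discrete set such as {-V, V} for binary networks.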
