Exchangeability and Kernel Invariance in Trained MLPs

2018-10-19

Russell Tsuchida, Fred Roosta, Marcus Gallagher

Abstract

In the analysis of machine learning models, it is often convenient to assume that the parameters are independently and identically distributed (IID). This assumption is not satisfied once the parameters are updated by a training process such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in multilayer perceptrons (MLPs) are exchangeable. This yields the result that, in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights moves away from zero.
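The layer-wise kernel referred to here can be illustrated numerically. The sketch below is our illustration, not the authors' code: it assumes a ReLU activation and IID N(0, 1/d) weights, and estimates the empirical kernel of a single fully-connected layer as the average product of post-activations over the hidden units. Under these IID assumptions the estimate should match the known closed-form arc-cosine kernel.

```python
# Illustrative sketch (assumptions: ReLU activation, IID N(0, 1/d) weights).
# Empirical layer-wise kernel of one fully-connected layer:
#   k(x, y) = (1/n) * sum_i relu(w_i . x) * relu(w_i . y)
import numpy as np

def empirical_kernel(W, x, y):
    """Average product of post-activations over the layer's n hidden units."""
    relu = lambda z: np.maximum(z, 0.0)
    return float(np.mean(relu(W @ x) * relu(W @ y)))

rng = np.random.default_rng(0)
d, n = 10, 100_000                                   # input dim, hidden width
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))   # IID weights, var 1/d
x = rng.normal(size=d)
y = rng.normal(size=d)

k = empirical_kernel(W, x, y)
```

For IID Gaussian weights this estimate converges (in the wide-layer limit) to the degree-1 arc-cosine kernel, which depends only on the input norms and the angle between x and y; the paper's claim is that this kernel changes little even after training perturbs the IID structure.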
