Are Saddles Good Enough for Deep Learning?

2017-06-07 · Code Available

Adepu Ravi Sankar, Vineeth N. Balasubramanian


Abstract

Recent years have seen growing interest in understanding deep neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to be effective in practice. In this work, however, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to saddle points with high degeneracy. Our findings are new and can have a significant impact on the development of gradient descent based methods for training deep networks. We validated our hypothesis through an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts to escape saddles ultimately converge to saddles with high degeneracy, which we define as 'good saddles'. We also verified Wigner's semicircle law in our experimental results.
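The abstract reports observing Wigner's semicircle law in the eigenvalue spectra encountered during training. As a minimal, hypothetical illustration (not the paper's code), the sketch below samples a large random symmetric matrix from the Gaussian Orthogonal Ensemble and checks that its empirical eigenvalue density matches the semicircle density p(x) = sqrt(4 - x^2) / (2*pi) on [-2, 2]; the matrix size `n` and bin count are arbitrary choices:

```python
import numpy as np

# Hypothetical illustration: eigenvalues of a large random symmetric (GOE)
# matrix follow Wigner's semicircle law, the spectral distribution the
# abstract reports verifying experimentally.
rng = np.random.default_rng(0)
n = 2000
a = rng.standard_normal((n, n))
# Symmetrize and scale so off-diagonal entries have variance 1/n;
# the spectrum then concentrates on [-2, 2].
h = (a + a.T) / np.sqrt(2 * n)

eigs = np.linalg.eigvalsh(h)

# Compare the empirical density against the semicircle density.
hist, edges = np.histogram(eigs, bins=40, range=(-2.0, 2.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4.0 - centers**2, 0.0, None)) / (2 * np.pi)

max_err = np.abs(hist - semicircle).max()
print(f"max deviation from semicircle density: {max_err:.3f}")
```

For a real network, `h` would instead be the (scaled) Hessian of the loss at the point of convergence; the paper's degeneracy claim then corresponds to a large fraction of eigenvalues concentrating near zero.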
