Posterior Concentration for Sparse Deep Learning

2018-03-24 · NeurIPS 2018

Nicholas Polson, Veronika Rockova

Abstract

Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to Dropout for improving generalizability of deep ReLU networks. This new type of regularization enables provable recovery of smooth input-output maps with unknown levels of smoothness. Indeed, we show that the posterior distribution concentrates at the near minimax rate for α-Hölder smooth maps, performing as well as if we knew the smoothness level ahead of time. Our result sheds light on architecture design for deep neural networks, namely the choice of depth, width and sparsity level. These network attributes typically depend on unknown smoothness in order to be optimal. We obviate this constraint with the fully Bayes construction. As an aside, we show that SS-DL does not overfit in the sense that the posterior concentrates on smaller networks with fewer (up to the optimal number of) nodes and links. Our results provide new theoretical justifications for deep ReLU networks from a Bayesian point of view.
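To make the prior concrete: a spike-and-slab prior places each network weight at exactly zero (the spike) with high probability, and otherwise draws it from a continuous slab distribution. The sketch below samples a weight matrix under such a prior; it is an illustrative assumption using a Gaussian slab and an inclusion probability `theta`, not the paper's exact construction.

```python
import numpy as np

def sample_spike_and_slab(shape, theta=0.05, slab_scale=1.0, seed=0):
    """Draw weights from a simple spike-and-slab prior:
    each entry is 0 with probability 1 - theta (the spike),
    or Gaussian N(0, slab_scale^2) with probability theta (the slab)."""
    rng = np.random.default_rng(seed)
    slab = rng.normal(0.0, slab_scale, size=shape)  # dense Gaussian slab draws
    gamma = rng.random(shape) < theta               # binary inclusion indicators
    return np.where(gamma, slab, 0.0)               # zero out the excluded weights

W = sample_spike_and_slab((256, 256), theta=0.05)
print(f"fraction of nonzero weights: {np.count_nonzero(W) / W.size:.3f}")
```

The inclusion indicators make the sparsity level itself random, which is what lets a fully Bayes procedure adapt to the unknown optimal network size rather than fixing it in advance.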
