
REVE: Regularizing Deep Learning with Variational Entropy Bound

2019-10-15

Antoine Saporta, Yifu Chen, Michael Blot, Matthieu Cord


Abstract

Studies of the generalization performance of machine learning algorithms through the lens of information theory suggest that compressed representations can guarantee good generalization, inspiring many compression-based regularization methods. In this paper, we introduce REVE, a new regularization scheme. Noting that compressing the representation can be sub-optimal, our first contribution is to identify a variable that is directly responsible for the final prediction. Our method aims at minimizing the class-conditional entropy of this latter variable. Second, we introduce a variational upper bound on this conditional entropy term. Finally, we propose a scheme to instantiate a tractable loss that is integrated into the training procedure of the neural network, and we demonstrate its efficiency on different neural networks and datasets.
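The abstract does not spell out the loss, but the key ingredient is the standard cross-entropy inequality H(Z|Y) ≤ E[-log q(Z|Y)], which holds for any variational distribution q. A minimal sketch of how such a bound could be instantiated as a training penalty, assuming a per-class diagonal-Gaussian variational family fitted to the batch (a hypothetical instantiation; the paper's actual variational family and loss may differ):

```python
import numpy as np

def reve_penalty(z, y, num_classes, eps=1e-6):
    """Variational upper bound on the class-conditional entropy H(Z|Y).

    Hypothetical sketch: for each class c, fit a diagonal Gaussian q(z|c)
    to the batch representations of that class, then average the Gaussian
    negative log-likelihood. By the cross-entropy inequality this upper
    bounds H(Z|Y), so minimizing it compresses the representations
    class-conditionally.
    """
    n = len(y)
    penalty = 0.0
    for c in range(num_classes):
        zc = z[y == c]
        if len(zc) < 2:  # need at least two samples to estimate a variance
            continue
        mu = zc.mean(axis=0)
        var = zc.var(axis=0) + eps  # eps avoids log(0) for collapsed dims
        # average diagonal-Gaussian negative log-likelihood for class c
        nll = 0.5 * np.mean(
            np.sum(np.log(2.0 * np.pi * var) + (zc - mu) ** 2 / var, axis=1)
        )
        penalty += (len(zc) / n) * nll  # weight by class frequency
    return penalty
```

In practice this penalty would be computed on an intermediate representation and added to the classification loss with a trade-off coefficient; representations that cluster tightly within each class yield a smaller bound than diffuse ones.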
