PAC-Bayes and Information Complexity
2021-03-04 · ICLR Workshop Neural Compression 2021
Pradeep Kr. Banerjee, Guido Montufar
Abstract
We point out that a number of well-known PAC-Bayesian-style and information-theoretic generalization bounds for randomized learning algorithms can be derived under a common framework, starting from a fundamental information exponential inequality. We also obtain new bounds for data-dependent priors and unbounded loss functions. Optimizing these bounds naturally gives rise to a method called Information Complexity Minimization, for which we discuss two practical examples for learning with neural networks, namely Entropy- and PAC-Bayes-SGD.
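For orientation, here is a minimal sketch of the kind of bound this family contains: the canonical PAC-Bayes bound in Maurer's form, not the paper's exact statement, assuming a loss bounded in $[0,1]$, a prior $P$ fixed before seeing the sample $S$ of $n$ i.i.d. points, and an arbitrary posterior $Q$.

% Illustrative sketch (assumption: Maurer's PAC-Bayes bound, loss in [0,1],
% prior P independent of the sample S); with probability at least
% 1 - \delta over the draw of S, simultaneously for all posteriors Q:
\[
  \mathbb{E}_{w \sim Q}\!\left[L(w)\right]
  \;\le\;
  \mathbb{E}_{w \sim Q}\!\left[\hat{L}_S(w)\right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}} .
\]
% The KL term is the "information complexity" of the posterior; minimizing
% the right-hand side over Q trades empirical risk against this complexity.

Minimizing the right-hand side over $Q$ is the Information Complexity Minimization idea referred to in the abstract, and Entropy-SGD and PAC-Bayes-SGD can be read as stochastic-gradient schemes for approximately carrying out that minimization with neural networks.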