Singular Bayesian Neural Networks
Mame Diarra Toure, David A. Stephens
Abstract
Bayesian neural networks promise calibrated uncertainty but require O(mn) parameters per m×n weight matrix under standard mean-field Gaussian posteriors. We argue this cost is often unnecessary, particularly when weight matrices exhibit fast singular value decay. By parameterizing weights as W = AB^T with A ∈ R^{m×r} and B ∈ R^{n×r}, we induce a posterior that is singular with respect to the Lebesgue measure, concentrating on the rank-r manifold. This singularity captures structured weight correlations through shared latent factors and is geometrically distinct from the mean-field independence assumption. We derive PAC-Bayes generalization bounds whose complexity term scales as r(m+n) instead of mn, and prove loss bounds that decompose the error into an optimization term and a rank-induced bias via the Eckart-Young-Mirsky theorem. We further adapt recent Gaussian complexity bounds for low-rank deterministic networks to Bayesian predictive means. Empirically, across MLPs, LSTMs, and Transformers on standard benchmarks, our method achieves predictive performance competitive with 5-member Deep Ensembles while using up to 15× fewer parameters. Furthermore, it substantially improves out-of-distribution (OOD) detection and often improves calibration relative to mean-field and perturbation baselines.
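To make the parameterization concrete, below is a minimal PyTorch sketch of a rank-r variational linear layer: each factor A and B gets its own mean-field Gaussian posterior, and the weight sample W = AB^T is formed from the sampled factors, so the induced distribution on W concentrates on the rank-r manifold. The class name, initialization constants, and softplus scale transform are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumptions labeled): a linear layer whose weight is
# W = A @ B.T with mean-field Gaussian variational posteriors on the
# factors A (m x r) and B (n x r), rather than on W itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankBayesLinear(nn.Module):
    """Hypothetical rank-r variational linear layer (illustrative only)."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Variational means and pre-softplus scales for A in R^{m x r}.
        self.mu_a = nn.Parameter(0.1 * torch.randn(out_features, rank))
        self.rho_a = nn.Parameter(torch.full((out_features, rank), -5.0))
        # Variational means and pre-softplus scales for B in R^{n x r}.
        self.mu_b = nn.Parameter(0.1 * torch.randn(in_features, rank))
        self.rho_b = nn.Parameter(torch.full((in_features, rank), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterized samples of the factors.
        a = self.mu_a + F.softplus(self.rho_a) * torch.randn_like(self.mu_a)
        b = self.mu_b + F.softplus(self.rho_b) * torch.randn_like(self.mu_b)
        # W = A B^T has rank at most r; the sampled weight never leaves
        # the rank-r manifold, hence the singular posterior on W.
        w = a @ b.t()
        return F.linear(x, w, self.bias)


layer = LowRankBayesLinear(in_features=256, out_features=128, rank=8)
out = layer(torch.randn(32, 256))  # each call draws a fresh weight sample
```

Note the storage cost: means and scales for both factors take 2r(m+n) scalars, versus 2mn for a full mean-field Gaussian on W, which is the r(m+n)-vs-mn gap reflected in the PAC-Bayes complexity term above.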