Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty

2021-07-09

Moritz Knolle, Alexander Ziller, Dmitrii Usynin, Rickmer Braren, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

Abstract

We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and exploit parallels between stochastic gradient Langevin dynamics, a scalable Bayesian inference technique for training deep neural networks, and DP-SGD, in order to train differentially private, Bayesian neural networks with minor adjustments to the original (DP-SGD) algorithm. Our approach provides considerably more reliable uncertainty estimates than DP-SGD, as demonstrated empirically by a reduction in expected calibration error (MNIST 5-fold, Pediatric Pneumonia Dataset 2-fold).
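The parallel the abstract exploits is that both DP-SGD and stochastic gradient Langevin dynamics inject isotropic Gaussian noise into the gradient update; DP-SGD additionally clips per-example gradients to bound sensitivity. The sketch below is a minimal, illustrative NumPy version of one such noisy update step, not the authors' implementation; all function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_mult`) are assumptions for illustration.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD-style update: clip each per-example gradient,
    average, add Gaussian noise, and take a gradient step.

    The injected Gaussian noise is what links DP-SGD to stochastic
    gradient Langevin dynamics (SGLD), where the same kind of noisy
    gradient step yields approximate posterior samples. Illustrative
    sketch only; hyperparameter names are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip: rescale any per-example gradient whose L2 norm exceeds clip_norm.
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    batch_size = len(clipped)
    avg_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size.
    noise = rng.normal(0.0, noise_mult * clip_norm / batch_size,
                       size=params.shape)
    return params - lr * (avg_grad + noise)
```

With `noise_mult` set to zero the step reduces to ordinary clipped SGD, which makes the noise term, shared with SGLD, easy to isolate when experimenting.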
