Gradient-Free Adversarial Attacks for Bayesian Neural Networks

2020-12-23 · AABI Symposium 2021 · Code Available

Matthew Yuan, Matthew Wicker, Luca Laurenti


Abstract

The existence of adversarial examples underscores the importance of understanding the robustness of machine learning models. Bayesian neural networks (BNNs), due to their calibrated uncertainty, have been shown to possess favorable adversarial robustness properties. However, when approximate Bayesian inference methods are employed, the adversarial robustness of BNNs is still not well understood. In this work, we employ gradient-free optimization methods to find adversarial examples for BNNs. In particular, we consider genetic algorithms, surrogate models, and zeroth-order optimization methods, and adapt them to the goal of finding adversarial examples for BNNs. In an empirical evaluation on the MNIST and Fashion MNIST datasets, we show that for various approximate Bayesian inference methods the use of gradient-free algorithms can greatly improve the rate of finding adversarial examples compared to state-of-the-art gradient-based methods.
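To make the zeroth-order idea concrete, here is a minimal illustrative sketch (not the paper's implementation): the "BNN" is stood in for by a handful of posterior weight samples for a toy linear classifier, the attack objective is the posterior-averaged loss on the true label, and gradients are estimated by symmetric finite differences along random probe directions, with the perturbation projected onto an L-infinity ball. All names, sizes, and hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a BNN posterior: a few sampled weight matrices for a
# linear classifier (hypothetical; the paper attacks real BNN posteriors).
n_samples, d, n_classes = 8, 16, 3
posterior = [rng.normal(size=(d, n_classes)) for _ in range(n_samples)]

def predictive_loss(x, label):
    """Negative posterior-averaged probability of the true label."""
    probs = []
    for W in posterior:
        logits = x @ W
        e = np.exp(logits - logits.max())          # stable softmax
        probs.append(e[label] / e.sum())
    return -np.mean(probs)

def zeroth_order_attack(x, label, eps=0.5, step=0.05, sigma=0.01, iters=200):
    """Generic zeroth-order attack sketch: finite-difference gradient
    estimates along random directions, ascending the predictive loss
    while staying inside an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    best, best_loss = x_adv, predictive_loss(x_adv, label)
    for _ in range(iters):
        u = rng.normal(size=x.shape)               # random probe direction
        g = (predictive_loss(x_adv + sigma * u, label)
             - predictive_loss(x_adv - sigma * u, label)) / (2 * sigma)
        x_adv = np.clip(x_adv + step * g * u,      # estimated gradient step
                        x - eps, x + eps)          # project back into the ball
        loss = predictive_loss(x_adv, label)
        if loss > best_loss:                       # keep the best iterate found
            best, best_loss = x_adv.copy(), loss
    return best, best_loss

x0 = rng.normal(size=d)
y = 0
x_adv, adv_loss = zeroth_order_attack(x0, y)
print(predictive_loss(x0, y), adv_loss)
```

Because no backward pass through the network is needed, the same loop applies unchanged whether the posterior samples come from variational inference, MCMC, or any other approximate inference scheme; only `predictive_loss` changes.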
