Provable robustness against all adversarial l_p-perturbations for p ≥ 1
2019-05-27 · ICLR 2020
Francesco Croce, Matthias Hein
- github.com/fra31/mmr-universal — official implementation (PyTorch)
Abstract
In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific l_p-perturbation models have been developed, we show that they do not come with any guarantee against other l_q-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness w.r.t. l_1- and l_∞-perturbations, and we show how this leads to the first provably robust models w.r.t. any l_p-norm for p ≥ 1.
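As background for why l_1- and l_∞-certificates together constrain every intermediate l_p-ball, the standard norm inequalities ||x||_∞ ≤ ||x||_p and ||x||_1 ≤ d^(1-1/p) ||x||_p already imply a (loose) certified l_p radius from the two individual certificates. The sketch below computes only this elementary baseline; the paper's convex-hull argument yields a strictly better radius, and the function name and interface here are illustrative, not from the paper's code.

```python
def naive_lp_radius(r1: float, r_inf: float, d: int, p: float) -> float:
    """Elementary certified l_p robustness radius implied by separate
    l_1 and l_inf certificates in dimension d, for p >= 1.

    Uses two standard norm inequalities:
      ||x||_inf <= ||x||_p              => B_p(r_inf) is inside B_inf(r_inf)
      ||x||_1   <= d**(1-1/p) ||x||_p   => B_p(r1 * d**(1/p - 1)) is inside B_1(r1)

    Each certificate alone thus guarantees an l_p radius; the best naive
    guarantee is the larger of the two. This is NOT the paper's bound,
    which exploits the convex hull of the union of the two balls.
    """
    return max(r_inf, r1 * d ** (1.0 / p - 1.0))

# Example: with an l_1 certificate of radius 1.0 and an l_inf certificate
# of radius 0.05 in dimension d = 16, the naive certified l_2 radius is
# max(0.05, 1.0 * 16**(-0.5)) = 0.25.
print(naive_lp_radius(1.0, 0.05, 16, 2))
```

The gap between this baseline and the convex-hull radius is exactly what makes the paper's geometric analysis worthwhile: the union-of-balls argument certifies a larger l_p-region than either certificate scaled in isolation.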