Improved robustness to adversarial examples using Lipschitz regularization of the loss

2018-10-01 · ICLR 2019 · Code Available

Chris Finlay, Adam Oberman, Bilal Abbasi

Abstract

We augment adversarial training (AT) with worst case adversarial training (WCAT), which improves adversarial robustness by 11% over the current state-of-the-art result in the ℓ2 norm on CIFAR-10. We obtain verifiable average case and worst case robustness guarantees, based on the expected and maximum values of the norm of the gradient of the loss. We interpret adversarial training as Total Variation regularization, which is a fundamental tool in mathematical image processing, and WCAT as Lipschitz regularization.
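The gradient-norm penalty the abstract describes can be illustrated with a minimal sketch: the training objective becomes the ordinary loss plus a scaled ℓ2 norm of the loss gradient with respect to the input. The example below uses a simple logistic model with an analytic input gradient; all function names and the penalty weight `lam` are hypothetical illustrations, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # binary logistic loss with labels y in {-1, +1}
    return np.log1p(np.exp(-y * np.dot(w, x)))

def input_grad_norm(w, x, y):
    # analytic gradient of the loss with respect to the INPUT x
    # (not the weights): d/dx log(1 + exp(-y w.x)) = -y * sigmoid(-y w.x) * w
    m = y * np.dot(w, x)
    g = -y * sigmoid(-m) * w
    return np.linalg.norm(g)  # ell-2 norm of the input gradient

def regularized_objective(w, x, y, lam=0.1):
    # loss plus a Lipschitz-style gradient-norm penalty,
    # in the spirit of the WCAT objective described in the abstract
    return logistic_loss(w, x, y) + lam * input_grad_norm(w, x, y)
```

Because `sigmoid` is bounded by 1, the input-gradient norm of this model is bounded by the weight norm, so penalizing it directly controls the local Lipschitz constant of the loss in the input.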
