
Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

2021-02-25 · NeurIPS 2021 · Code Available

Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio


Abstract

Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally-demanding iterations, even if specialized to a given perturbation model. In this work, we overcome these limitations by proposing a fast minimum-norm (FMN) attack that works with different ℓp-norm perturbation models (p = 0, 1, 2, ∞), is robust to hyperparameter choices, does not require adversarial starting points, and converges within a few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an ℓp-norm constraint of size ε, while adapting ε to minimize the distance of the current sample to the decision boundary. Extensive experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and computation time, while reporting comparable or even smaller perturbation sizes.
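The abstract describes FMN as an iterative search for the most confidently misclassified point inside an ℓp ball of radius ε, while ε is adapted toward the distance to the decision boundary. The sketch below is a rough, simplified ℓ2-only rendering of that idea in PyTorch; the logit-difference loss, the ε-update rule, the decaying step size, and the function name `fmn_l2_sketch` are assumptions made for illustration, not the authors' reference implementation.

```python
# Simplified FMN-style loop (ell_2 case only), illustrating the scheme sketched
# in the abstract. Loss, eps-update rule, and step-size schedule are assumptions.
import torch


def fmn_l2_sketch(model, x, y, steps=100, alpha=1.0, gamma=0.05):
    """Search for a small ell_2 perturbation of x (true labels y) that is misclassified."""
    x_adv = x.clone()
    best = x.clone()
    best_norm = torch.full((x.shape[0],), float("inf"), device=x.device)

    for i in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)

        # Logit-difference loss: positive while the sample is still classified as y,
        # negative once it is misclassified.
        onehot = torch.zeros_like(logits).scatter_(1, y.unsqueeze(1), 1.0)
        other = (logits - onehot * 1e9).max(dim=1).values  # best non-true-class logit
        loss = (logits * onehot).sum(dim=1) - other
        grad = torch.autograd.grad(loss.sum(), x_adv)[0]

        with torch.no_grad():
            delta = x_adv - x
            norm = delta.flatten(1).norm(dim=1)
            is_adv = loss < 0

            # Keep the smallest-norm adversarial example found so far.
            improved = is_adv & (norm < best_norm)
            best[improved] = x_adv[improved]
            best_norm[improved] = norm[improved]

            # Adapt the norm constraint eps: shrink it when the point is adversarial,
            # otherwise set it to a linear estimate of the distance to the boundary.
            grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            eps = torch.where(is_adv, (1.0 - gamma) * norm,
                              norm + loss.clamp_min(0) / grad_norm)

            # Normalized gradient step that decreases the loss, then projection onto
            # the eps-ball around x and the [0, 1] input box (assumes 4-D image batches).
            step = alpha * (1.0 - i / steps)
            x_adv = x_adv - step * grad / grad_norm.view(-1, 1, 1, 1)
            delta = x_adv - x
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            scale = (eps / d_norm).clamp_max(1.0).view(-1, 1, 1, 1)
            x_adv = (x + scale * delta).clamp(0.0, 1.0).detach()

    found = (best_norm < float("inf")).view(-1, 1, 1, 1)
    return torch.where(found, best, x_adv)
```

Tracking the per-sample best solution means the sketch returns the smallest perturbation encountered during the search rather than the final iterate, which is how minimum-norm attacks are typically evaluated.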
