Augmented Lagrangian Adversarial Attacks
Jérôme Rony, Eric Granger, Marco Pedersoli, Ismail Ben Ayed
Code
- github.com/jeromerony/augmented_lagrangian_adversarial_attacks (official, PyTorch)
- github.com/jeromerony/adversarial-library (PyTorch)
Abstract
Adversarial attack algorithms are dominated by penalty methods, which are slow in practice, or by more efficient distance-customized methods, which are heavily tailored to the properties of the distance considered. We propose a white-box attack algorithm to generate minimally perturbed adversarial examples based on Augmented Lagrangian principles. We introduce several algorithmic modifications that have a crucial effect on performance. Our attack enjoys the generality of penalty methods and the computational efficiency of distance-customized algorithms, and can be readily used with a wide range of distances. We compare our attack to state-of-the-art methods on three datasets and several models, and consistently obtain competitive performance with similar or lower computational complexity.
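To make the core idea concrete, below is a minimal sketch of the augmented Lagrangian principle the abstract refers to: minimizing an objective f(x) subject to an inequality constraint g(x) <= 0 by alternating inner minimization of the augmented Lagrangian with outer multiplier and penalty updates. The toy 1-D problem, function names, and step-size schedule are illustrative assumptions, not the paper's actual attack formulation (which constrains the classifier's output while minimizing a perturbation distance).

```python
# Hedged sketch of an augmented Lagrangian method for
#   minimize f(x)  subject to  g(x) <= 0.
# The problem and hyperparameters below are illustrative only.

def augmented_lagrangian(f_grad, g, g_grad, x0, lam=0.0, mu=1.0,
                         outer_steps=20, inner_steps=100, lr=0.5):
    """Alternate inner gradient descent on the augmented Lagrangian
    L(x) = f(x) + (mu/2) * max(0, g(x) + lam/mu)^2
    with outer updates of the multiplier lam and penalty weight mu."""
    x = x0
    for _ in range(outer_steps):
        # Scale the inner step size down as mu grows, for stability.
        inner_lr = lr / max(1.0, mu)
        for _ in range(inner_steps):
            slack = g(x) + lam / mu
            if slack > 0:  # constraint (or shifted constraint) active
                grad = f_grad(x) + mu * slack * g_grad(x)
            else:          # inactive: only the objective contributes
                grad = f_grad(x)
            x -= inner_lr * grad
        # Multiplier update, clamped at 0 for an inequality constraint,
        # followed by a geometric increase of the penalty weight.
        lam = max(0.0, lam + mu * g(x))
        mu *= 2.0
    return x

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# The constrained minimizer is x = 1 (with multiplier lam = 2).
x_star = augmented_lagrangian(
    f_grad=lambda x: 2.0 * x,   # gradient of f(x) = x^2
    g=lambda x: 1.0 - x,        # constraint 1 - x <= 0
    g_grad=lambda x: -1.0,
    x0=0.0,
)
```

Compared with a pure penalty method, the multiplier `lam` lets the iterates satisfy the constraint without driving `mu` to extreme values, which is the efficiency argument the abstract makes against plain penalty-based attacks.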