A Note on Implementation Errors in Recent Adaptive Attacks Against Multi-Resolution Self-Ensembles
Stanislav Fort
Abstract
This note documents an implementation issue in recent adaptive attacks (Zhang et al. [2024]) against the multi-resolution self-ensemble defense (Fort and Lakshminarayanan [2024]). The implementation allowed adversarial perturbations to exceed the standard $L_\infty = 8/255$ bound by up to a factor of 20, reaching magnitudes of up to $L_\infty = 160/255$. When attacks are properly constrained within the intended bounds, the defense maintains non-trivial robustness. Beyond highlighting the importance of careful validation in adversarial machine learning research, our analysis reveals an intriguing finding: properly bounded adaptive attacks against strong multi-resolution self-ensembles often align with human perception, suggesting the need to reconsider how we measure adversarial robustness.
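The constraint at issue can be made concrete. A minimal sketch (not the authors' code; the function names and shapes are illustrative assumptions) of how an attack evaluation would verify and enforce the $L_\infty = 8/255$ budget by projecting perturbations back into the allowed ball:

```python
import numpy as np

EPS = 8 / 255  # standard L_inf perturbation budget


def linf_norm(x_adv, x_clean):
    """Largest per-pixel deviation between adversarial and clean images."""
    return np.abs(x_adv - x_clean).max()


def project_linf(x_adv, x_clean, eps=EPS):
    """Project an adversarial example back into the L_inf ball of radius eps
    around the clean image, keeping pixels in the valid [0, 1] range."""
    delta = np.clip(x_adv - x_clean, -eps, eps)
    return np.clip(x_clean + delta, 0.0, 1.0)


# Hypothetical example: a uniform 100/255 shift far exceeds the budget.
x_clean = np.full((3, 32, 32), 0.5)
x_adv = np.clip(x_clean + 100 / 255, 0.0, 1.0)

print(linf_norm(x_adv, x_clean) > EPS)                      # out-of-bounds attack
x_fixed = project_linf(x_adv, x_clean)
print(linf_norm(x_fixed, x_clean) <= EPS + 1e-9)            # after projection
```

A check like `linf_norm(x_adv, x_clean) <= EPS` after every attack step is a cheap guard against the kind of bound violation described above, where perturbations silently reached $160/255$.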