
Exploiting the Sensitivity of L_2 Adversarial Examples to Erase-and-Restore

2020-01-01

Fei Zuo, Qiang Zeng


Abstract

By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. L_2 adversarial perturbations by Carlini and Wagner (CW) are among the most effective and most difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-L_2 AEs remains an open question. We find that, by randomly erasing some pixels of an L_2 AE and then restoring it with an inpainting technique, the AE's classification result tends to change across the two steps, while a benign sample's does not. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits this intriguing sensitivity of L_2 attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique detects over 98% of L_2 AEs with a very low false positive rate on benign images. The detection technique also exhibits high transferability: a detection system trained on CW-L_2 AEs can accurately detect AEs generated by another L_2 attack method. More importantly, our approach demonstrates strong resilience to adaptive L_2 attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
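The core detection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `classify` function, the erase fraction, and the mean-fill restoration (a stand-in for a real inpainting algorithm such as patch- or diffusion-based inpainting) are all assumptions introduced here for clarity.

```python
import numpy as np

def erase_and_restore_flags(image, classify, erase_frac=0.2, rng=None):
    """Sketch of one E&R check: randomly erase pixels, restore them,
    and flag the input as a suspected AE if its label changes.

    `classify` is a hypothetical callable mapping an image array to a
    class label; `erase_frac` is an assumed erasure ratio.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    mask = rng.random((h, w)) < erase_frac      # pixels to erase

    erased = image.copy()
    erased[mask] = 0.0                          # erase selected pixels

    # Stand-in for a real inpainting method: fill erased pixels with
    # the per-channel mean of the surviving pixels. The paper's E&R
    # uses an actual inpainting technique here.
    restored = erased.copy()
    restored[mask] = erased[~mask].mean(axis=0)

    # An L_2 AE tends to flip its label after erase-and-restore,
    # while a benign image tends to keep it.
    return classify(image) != classify(restored)
```

In practice the check could be repeated with several random masks and the input flagged if the label flips in enough trials; a uniform benign image with a trivial brightness-threshold classifier, for example, keeps its label and is not flagged.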
