Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples)
Woohyung Chun, Sung-Min Hong, Junho Huh, Inyup Kang
Code
- github.com/stonylinux/mitigating_large_adversarial_perturbations_on_X-MAS (official, PyTorch)
Abstract
We propose a scheme that mitigates the adversarial perturbation ε on an adversarial example X_adv (= X ± ε, where X is a benign sample) by subtracting an estimated perturbation ε̂ from X + ε and adding ε̂ to X − ε. The estimate ε̂ comes from the difference between X_adv and its moving-averaged outcome W_avg * X_adv, where W_avg is an N×N moving-average kernel whose coefficients are all one. Usually, adjacent samples of an image are close to each other, so we can let X ≈ W_avg * X (we name this relation X-MAS [X minus Moving Averaged Samples]). With this relation, the estimated perturbation ε̂ falls within the range 0 ≤ |ε̂| ≤ 2ε. The scheme is also extended to multi-level mitigation by treating the mitigated adversarial example as a new adversarial example to be mitigated. Multi-level mitigation brings X_adv closer to X with a smaller (i.e. mitigated) perturbation than the original unmitigated one by setting the moving-averaged adversarial sample W_avg * X_adv (which has a smaller perturbation than X_adv when X ≈ W_avg * X) as a boundary that the multi-level mitigation cannot cross (i.e. a sample being decreased cannot go below it, and a sample being increased cannot go beyond it). With multi-level mitigation, we obtain high prediction accuracies even for adversarial examples with a large perturbation (i.e. ε > 16). The proposed scheme is evaluated on adversarial examples crafted by FGSM (Fast Gradient Sign Method)-based attacks against ResNet-50 trained on the ImageNet dataset.
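The range claimed for ε̂ can be motivated by a short derivation. Assuming an FGSM-style perturbation ε·s with per-pixel sign s ∈ {−1, +1}, and W_avg normalized by N² so that it computes a true mean (both assumptions ours, not stated in the abstract):

```latex
\begin{aligned}
\hat{\varepsilon} &= X_{adv} - W_{avg} * X_{adv} \\
 &= (X - W_{avg} * X) + \varepsilon s - \varepsilon\,(W_{avg} * s) \\
 &\approx \varepsilon\,(s - W_{avg} * s) \qquad \text{using } X \approx W_{avg} * X .
\end{aligned}
```

Since every entry of $W_{avg} * s$ lies in $[-1, 1]$, each pixel of $\hat{\varepsilon}$ has magnitude between $0$ and $2\varepsilon$, and it keeps the sign of the local perturbation: where $s = +1$, $\hat{\varepsilon} = \varepsilon(1 - W_{avg} * s) \ge 0$, and symmetrically for $s = -1$.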
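A minimal single-level sketch of this idea in NumPy, under the same assumptions (normalized N×N box kernel, edge padding, a smooth benign image); this is our illustrative reading, not the paper's implementation:

```python
import numpy as np

def moving_average(x, n=3):
    """N x N box average with edge padding (W_avg * x in the paper's notation,
    assuming the kernel is normalized by N^2)."""
    pad = n // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + n, j:j + n].mean()
    return out

def xmas_mitigate(x_adv, n=3):
    """One mitigation level: estimate eps_hat = X_adv - W_avg * X_adv and
    remove it. eps_hat carries the per-pixel sign of the perturbation, so
    pixels pushed up by the attack are pulled down and vice versa."""
    eps_hat = x_adv - moving_average(x_adv, n)
    return x_adv - eps_hat

# Usage: a smooth benign image satisfies X ~ W_avg * X, so one level of
# mitigation shrinks an FGSM-style +/- eps perturbation.
rng = np.random.default_rng(0)
x = np.full((32, 32), 128.0)                     # smooth benign sample
eps = 16.0
sign = rng.choice([-1.0, 1.0], size=x.shape)     # FGSM per-pixel sign
x_adv = x + eps * sign
x_mit = xmas_mitigate(x_adv)
print(np.abs(x_adv - x).mean(), np.abs(x_mit - x).mean())
```

On this toy input the mean absolute perturbation drops well below the original ε = 16, and the per-pixel estimate stays within the 2ε bound derived above.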