Adaptive Modeling Against Adversarial Attacks

2021-12-23

Zhiwen Yan, Teck Khim Ng


Abstract

Adversarial training, i.e. training a deep learning model on adversarially perturbed data, is one of the most successful defense methods for deep learning models. We find that the white-box robustness of an adversarially trained model can be further improved by fine-tuning the model at inference time to adapt to each adversarial input, exploiting the extra information that input carries. We introduce an algorithm that "post trains" the model at inference time, using existing training data, to discriminate between the original output class and a "neighbor" class. With our algorithm, the accuracy of a pre-trained Fast-FGSM CIFAR-10 classifier base model against the white-box projected gradient descent (PGD) attack improves significantly, from 46.8% to 64.5%.
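The post-training idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a linear classifier for simplicity, takes the model's top prediction and its runner-up logit as the "neighbor" class, briefly fine-tunes a copy of the model on existing training data from those two classes, and then re-classifies the input. The function name, update rule, and hyperparameters are all assumptions for illustration.

```python
import numpy as np

def post_train_predict(W, x, train_X, train_y, lr=0.1, steps=20):
    """Sketch of inference-stage "post training" (assumed interface):
    fine-tune a copy of a linear classifier W on training samples from
    the predicted class and its nearest "neighbor" class, then
    re-classify the (possibly adversarial) input x."""
    logits = W @ x
    order = np.argsort(logits)[::-1]
    c1, c2 = order[0], order[1]              # original class and neighbor class
    mask = np.isin(train_y, [c1, c2])        # existing training data for the pair
    Xs, ys = train_X[mask], train_y[mask]
    Wc = W.copy()                            # fine-tune a copy, not the base model
    for _ in range(steps):
        for xi, yi in zip(Xs, ys):
            z = Wc @ xi
            p = np.exp(z - z.max())
            p /= p.sum()                     # softmax probabilities
            p[yi] -= 1.0                     # cross-entropy gradient w.r.t. logits
            Wc -= lr * np.outer(p, xi)       # SGD update
    return int(np.argmax(Wc @ x))
```

In the paper's setting the classifier is a deep network and only a brief fine-tuning pass is run per input, so the per-example adaptation cost stays small relative to full retraining.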
