
AFD: Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement

2024-01-26 · Code Available

Nuoyan Zhou, Dawei Zhou, Decheng Liu, Nannan Wang, Xinbo Gao


Abstract

Adversarial fine-tuning methods enhance adversarial robustness by fine-tuning a pre-trained model with adversarial training. However, we find that certain latent features of adversarial samples are confounded by the adversarial perturbation, leading to an unexpectedly large gap between the last-hidden-layer features of natural and adversarial samples. To address this issue, we propose a disentanglement-based approach that explicitly models and then removes these perturbation-specific latent features. Specifically, we introduce a feature disentangler that separates the perturbation-specific latent features from the features of adversarial samples, boosting robustness by eliminating them. In addition, we align the clean features of the pre-trained model with the adversarial features of the fine-tuned model, so as to benefit from the intrinsic features of natural samples. Empirical evaluations on three benchmark datasets demonstrate that our approach surpasses existing adversarial fine-tuning methods and adversarial training baselines.
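The two ideas in the abstract can be illustrated with a toy NumPy sketch: a (hypothetical) linear disentangler predicts the perturbation-specific component of the adversarial features, that component is subtracted out, and an alignment loss pulls the remaining features toward the clean pre-trained features. This is a conceptual sketch under invented dimensions and parameterization, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical); the paper operates on the last hidden layer of a network.
batch, dim = 4, 8

# Stand-ins for last-hidden-layer features.
f_nat = rng.normal(size=(batch, dim))                # natural-sample features (pre-trained model)
f_adv = f_nat + 0.3 * rng.normal(size=(batch, dim))  # adversarial-sample features (fine-tuned model)

# A linear "disentangler" (hypothetical parameterization) that predicts the
# perturbation-specific latent component of the adversarial features.
W = rng.normal(scale=0.1, size=(dim, dim))
f_specific = f_adv @ W          # perturbation-specific latent features to remove
f_robust = f_adv - f_specific   # disentangled features after removal

# Alignment objective: pull the disentangled adversarial features toward the
# clean features of the pre-trained model (mean squared error here as a stand-in).
align_loss = float(np.mean((f_robust - f_nat) ** 2))
print(f"alignment loss: {align_loss:.4f}")
```

In training, the disentangler's parameters and the alignment loss would be optimized jointly with the adversarial fine-tuning objective; the sketch only shows the forward computation of the two components.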
