Adversarial Robustness via Runtime Masking and Cleansing

2020-01-01 · ICML 2020

Yi-Hsuan Wu, Chia-Hung Yuan, Shan-Hung (Brandon) Wu


Abstract

Deep neural networks have been shown to be vulnerable to adversarial attacks. This motivates robust learning techniques, such as adversarial training, whose goal is to learn a network that is robust against adversarial attacks. However, the sample complexity of robust learning can be significantly larger than that of “standard” learning. In this paper, we propose improving the adversarial robustness of a network by leveraging the potentially large amount of test data seen at runtime. We devise a new defense method, called runtime masking and cleansing (RMC), that adapts the network at runtime before making a prediction, dynamically masking network gradients and cleansing the model of the non-robust features inevitably learned during training due to the limited size of the training set. We conduct experiments on real-world datasets, and the results empirically demonstrate the effectiveness of RMC.
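The abstract outlines a test-time adaptation scheme: the network is updated on data related to the test input before each prediction is made. Below is a minimal PyTorch sketch of that idea, assuming the adaptation data comes from nearest-neighbor retrieval over a memory of stored (possibly adversarially augmented) examples; the function names, the distance metric, and the hyperparameters are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F


def adapt_and_predict(model, x_test, memory_x, memory_y, k=8, steps=5, lr=1e-3):
    """Adapt `model` on the k stored examples nearest to `x_test`,
    then predict on `x_test`. All hyperparameters are illustrative."""
    # Retrieve the k nearest neighbors of the test point. Distance in
    # flattened input space is an assumption; a learned feature space
    # could be used instead.
    dists = torch.cdist(x_test.flatten(1), memory_x.flatten(1))  # (1, N)
    idx = dists.topk(k, largest=False).indices.squeeze(0)        # (k,)
    nn_x, nn_y = memory_x[idx], memory_y[idx]

    # A few gradient steps on the retrieved neighbors adapt the network
    # in the neighborhood of x_test before the prediction is made.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(nn_x), nn_y).backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        return model(x_test).argmax(dim=1)
```

In the setting the abstract describes, such a memory could also accumulate inputs encountered at test time, which is one plausible way the "potentially large amount of test data seen at runtime" would be leveraged.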
