
Adversarial and Clean Data Are Not Twins

2017-04-17 · Code Available

Zhitao Gong, Wenlu Wang, Wei-Shinn Ku


Abstract

Adversarial attacks have cast a shadow on the massive success of deep neural networks. Despite being almost visually identical to the clean data, adversarial images can fool deep neural networks into wrong predictions with very high confidence. In this paper, however, we show that we can build a simple binary classifier that separates adversarial from clean data with over 99% accuracy. We also empirically show that the binary classifier is robust to a second-round adversarial attack; in other words, it is difficult to disguise adversarial samples to bypass the binary classifier. Furthermore, we empirically investigate the generalization limitation that lingers in all current defensive methods, including the binary classifier approach, and we hypothesize that this is a result of an intrinsic property of the adversarial crafting algorithms.
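The core idea — craft adversarial examples against a victim model, then train a separate binary classifier to tell them apart from clean inputs — can be illustrated on toy data. The sketch below is not the paper's setup (the authors use convolutional networks on image datasets); it uses a 2-D logistic-regression victim, an FGSM-style sign-of-gradient perturbation, and, as a stand-in for a learned detector, a simple threshold on the victim's margin, exploiting the fact that these adversarial points land near the decision boundary. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy victim model: logistic regression on two well-separated blobs ---
n = 200
X = np.vstack([rng.normal(-2.0, 0.3, (n, 2)),   # class 0
               rng.normal(+2.0, 0.3, (n, 2))])  # class 1
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent with a small L2 penalty to keep the weights bounded.
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y) / len(y) + 0.05 * w)
    b -= 0.1 * np.mean(p - y)

# --- FGSM-style attack: step in the sign of the input gradient of the loss ---
eps = 2.5
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w   # d(cross-entropy)/d(input)
X_adv = X + eps * np.sign(grad_x)

acc = lambda A, t: np.mean((sigmoid(A @ w + b) > 0.5) == t)
print(f"victim accuracy  clean: {acc(X, y):.2f}  adversarial: {acc(X_adv, y):.2f}")

# --- Binary detector: adversarial points sit close to the decision boundary,
# so thresholding the victim's |logit| separates them from clean points ---
margin_clean = np.abs(X @ w + b)
margin_adv = np.abs(X_adv @ w + b)
thresh = (margin_clean.mean() + margin_adv.mean()) / 2
det_acc = np.mean(np.concatenate([margin_clean > thresh,   # clean: large margin
                                  margin_adv <= thresh]))  # adversarial: small
print(f"detector accuracy: {det_acc:.2f}")
```

On this contrived geometry the margin detector is near-perfect, which mirrors the paper's headline result only in spirit; the paper's detector is a trained network and its robustness claims concern second-round attacks against that detector, which this sketch does not model.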
