SOTAVerified

Towards Frequency-Based Explanation for Robust CNN

2020-05-06

Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal, Zihao Ding

Abstract

Current explanation techniques for building a transparent Convolutional Neural Network (CNN) mainly focus on connecting human-understandable input features with the model's prediction, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further quantify the contribution of different frequency components to the model's prediction. We show that the model's vulnerability to tiny distortions results from its reliance on high-frequency features, which are the target features of adversarial (black-box and white-box) attackers. We further show that a model which develops a stronger association between the low-frequency components and the true labels is more robust, which explains why adversarially trained models are more robust against tiny distortions.
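The frequency decomposition described in the abstract can be illustrated with a minimal sketch: split an image into low- and high-frequency components by masking the centered 2-D Fourier spectrum with a circular cutoff. The function name and the cutoff radius below are illustrative choices, not part of the paper's method.

```python
import numpy as np

def frequency_split(image, radius):
    """Split a grayscale image into low- and high-frequency parts
    using a circular mask in the centered 2-D Fourier spectrum."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Distance of each spectral coefficient from the zero-frequency center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius
    # Inverse-transform each masked half of the spectrum.
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return low, high

img = np.random.rand(32, 32)
low, high = frequency_split(img, radius=8)
# Because the two masks partition the spectrum, the components
# sum back to the original image (up to numerical error).
print(np.allclose(low + high, img))
```

Feeding only the `low` or only the `high` component to a trained CNN is one way to probe how much each frequency band contributes to the prediction.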
