
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients

2020-01-01 · ICLR 2020

Haohan Wang, Xindi Wu, Songwei Ge, Zachary C. Lipton, Eric P. Xing

Abstract

Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans are more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel. We apply our regularization to several popular training methods, demonstrating that models with the proposed smooth kernels enjoy improved adversarial robustness. Further, building on recent work establishing connections between adversarial robustness and interpretability, we show that our method appears to give more perceptually-aligned gradients.
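The abstract describes the regularizer only at a high level: a penalty on large differences between adjacent components within each convolutional kernel. One plausible realization, sketched below under the assumption of a squared-difference penalty over horizontally and vertically adjacent kernel entries (the paper's exact functional form and weighting are not given here), looks like this in NumPy; the function name `smoothness_penalty` and the kernel layout `(out_channels, in_channels, height, width)` are illustrative choices, not taken from the paper:

```python
import numpy as np

def smoothness_penalty(kernel):
    """Sum of squared differences between adjacent kernel components.

    kernel: array of shape (out_channels, in_channels, height, width).
    A perfectly smooth (constant) kernel incurs zero penalty; sharp,
    high-frequency kernels are penalized heavily.
    """
    dh = kernel[..., 1:, :] - kernel[..., :-1, :]  # vertically adjacent entries
    dw = kernel[..., :, 1:] - kernel[..., :, :-1]  # horizontally adjacent entries
    return float((dh ** 2).sum() + (dw ** 2).sum())

# A constant kernel is maximally smooth: zero penalty.
flat = np.ones((4, 3, 3, 3))
print(smoothness_penalty(flat))  # 0.0

# A kernel with sharp sign flips between neighbours is penalized.
checker = np.ones((1, 1, 3, 3))
checker[0, 0, ::2, 1::2] = -1.0
checker[0, 0, 1::2, ::2] = -1.0
print(smoothness_penalty(checker) > 0)  # True
```

In training, such a term would typically be added to the task loss with a tunable coefficient, so that gradient descent trades off fitting the data against keeping kernels smooth.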
