
Sparse DNNs with Improved Adversarial Robustness

2018-10-23 · NeurIPS 2018

Yiwen Guo, Chao Zhang, Chang-Shui Zhang, Yurong Chen

Abstract

Deep neural networks (DNNs) are computationally and memory intensive and vulnerable to adversarial attacks, which makes them prohibitive in some real-world applications. By converting dense models into sparse ones, pruning appears to be a promising solution for reducing the computation/memory cost. This paper studies classification models, especially DNN-based ones, and demonstrates that there exist intrinsic relationships between their sparsity and adversarial robustness. Our analyses reveal, both theoretically and empirically, that nonlinear DNN-based classifiers behave differently from some linear ones under l_2 attacks. We further demonstrate that an appropriately higher model sparsity implies better robustness for nonlinear DNNs, whereas over-sparsified models are harder to make robust against adversarial examples.
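The abstract rests on two ingredients that are easy to sketch: weight pruning to obtain a sparse model, and l_2-bounded adversarial attacks to probe robustness. Below is a minimal PyTorch sketch, not the authors' code; magnitude_prune, pgd_l2, and all hyperparameters (eps, alpha, steps) are illustrative assumptions for one plausible way to sparsify a network and then attack it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of each conv/linear layer in place.

    A one-shot sketch: the zeros are not re-enforced during any later training.
    """
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.data
            k = int(sparsity * w.numel())
            if k < 1:
                continue
            # The k-th smallest absolute value serves as the pruning threshold.
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).to(w.dtype))


def pgd_l2(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
           eps: float = 1.0, alpha: float = 0.25, steps: int = 10) -> torch.Tensor:
    """Craft l_2-bounded adversarial examples via projected gradient descent.

    Assumes x is a batch of images in [0, 1] with shape (N, C, H, W).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a fixed-size step along the l_2-normalized gradient.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / g_norm
        # Project the total perturbation back onto the l_2 ball of radius eps.
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = (x + delta * torch.clamp(eps / d_norm, max=1.0)).clamp(0.0, 1.0)
    return x_adv.detach()
```

Under these assumptions, the kind of sweep the abstract's sparsity-robustness claim suggests would prune a trained model at several sparsity levels and compare its accuracy on pgd_l2 examples at each level.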
