Generalization Properties of Adversarial Training for ℓ0-Bounded Adversarial Attacks
Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
Abstract
It has been widely observed that neural networks are vulnerable to small additive perturbations of the input that cause misclassification. In this paper, we focus on ℓ0-bounded adversarial attacks, and aim to theoretically characterize the performance of adversarial training for an important class of truncated classifiers. Such classifiers have been shown to have strong performance in the ℓ0-adversarial setting, both empirically and theoretically under the Gaussian mixture model. The main contribution of this paper is a novel, distribution-independent generalization bound for binary classification under ℓ0-bounded adversarial perturbations. Deriving a generalization bound in this setting poses two main challenges: (i) the truncated inner product, which is highly non-linear; and (ii) the maximization over the ℓ0 ball arising from adversarial training, which is non-convex and highly non-smooth. To tackle these challenges, we develop new coding techniques for bounding the combinatorial dimension of the truncated hypothesis class.
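To make the notion of a truncated inner product concrete, the sketch below illustrates one plausible form of truncation: since an ℓ0 adversary can corrupt at most k coordinates, dropping the k largest and k smallest coordinate-wise products prevents any k corrupted entries from dominating the decision. This is an illustrative assumption, not necessarily the paper's exact construction; all function names here are hypothetical.

```python
import numpy as np

def truncated_inner_product(w, x, k):
    """Sum of the coordinate-wise products w_i * x_i, excluding the
    k smallest and k largest terms (hypothetical truncation rule)."""
    p = np.sort(w * x)
    # An l0-bounded adversary can alter at most k coordinates of x, so at
    # most k products can be pushed to extreme values; discarding the
    # extremes on both ends limits the adversary's influence on the sum.
    return p[k:len(p) - k].sum()

def truncated_classify(w, x, k):
    # Binary decision based on the sign of the truncated inner product.
    return 1 if truncated_inner_product(w, x, k) >= 0 else -1
```

For example, with w = (1, ..., 1) and x = (1, 1, 1, 1, 1, -100), a single corrupted coordinate flips the sign of the ordinary inner product, while the truncated classifier with k = 1 still outputs the clean label.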