
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification

2020-06-29

Chen Dan, Yuting Wei, Pradeep Ravikumar


Abstract

Adversarial robustness has become a fundamental requirement in modern machine learning applications, yet there has been surprisingly little statistical understanding of it so far. In this paper, we provide the first optimal minimax guarantees on the excess risk for adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018). The results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting. For Gaussian mixtures with AdvSNR value r, we establish an excess risk lower bound of order Θ(e^{-(1/8+o(1)) r²} · d/n) and design a computationally efficient estimator that achieves this optimal rate. Our results are built upon a minimal set of assumptions while covering a wide spectrum of adversarial perturbations, including ℓ_p balls for any p ≥ 1.
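As a concrete illustration of the setting (a standard textbook computation, not the paper's estimator or its minimax bound), consider the symmetric Gaussian mixture x ~ N(yμ, I_d) with label y ∈ {±1} and a linear classifier sign(⟨w, x⟩). Under ℓ∞ perturbations of radius ε, the adversary's worst-case move always reduces the margin y⟨w, x⟩ by ε‖w‖₁, so the robust error has the closed form Φ((ε‖w‖₁ − ⟨w, μ⟩)/‖w‖₂). The sketch below checks this closed form against a Monte Carlo simulation; the specific choices of μ, w, and ε are arbitrary.

```python
# Sketch: l_inf-robust error of a linear classifier on a symmetric
# Gaussian mixture x ~ N(y*mu, I_d), y in {+1, -1}.  The worst-case
# l_inf perturbation of radius eps lowers the margin y<w, x> by
# eps * ||w||_1, which yields the closed form
#     robust_err = Phi((eps*||w||_1 - <w, mu>) / ||w||_2).
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def robust_error_closed_form(w, mu, eps):
    dot = sum(wi * mi for wi, mi in zip(w, mu))
    l1 = sum(abs(wi) for wi in w)
    l2 = math.sqrt(sum(wi * wi for wi in w))
    return phi((eps * l1 - dot) / l2)

def robust_error_monte_carlo(w, mu, eps, n=200_000, seed=0):
    rng = random.Random(seed)
    l1 = sum(abs(wi) for wi in w)
    errs = 0
    for _ in range(n):
        y = 1 if rng.random() < 0.5 else -1
        x = [y * mi + rng.gauss(0.0, 1.0) for mi in mu]
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # the adversary can always shave eps * ||w||_1 off the margin
        if margin - eps * l1 <= 0:
            errs += 1
    return errs / n

# Example parameters (chosen for illustration only)
d = 5
mu = [1.0 / math.sqrt(d)] * d   # unit-norm mean, so the standard SNR is 1
w = mu                          # the non-robust Bayes direction
eps = 0.1

print(robust_error_closed_form(w, mu, eps))
print(robust_error_monte_carlo(w, mu, eps))
```

Note that w = μ is the Bayes-optimal direction only for ε = 0; part of what makes the adversarial problem statistically distinct is that the robustly optimal direction trades ⟨w, μ⟩ against ε‖w‖₁.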
