SOTAVerified

Computational Asymmetries in Robust Classification

2023-06-25 · Code Available

Samuele Marro, Michele Lombardi


Abstract

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is NP-hard, ensuring their robustness at training time is Σ₂ᴾ-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is NP-hard, while attacking it is Σ₂ᴾ-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
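The abstract's last claim has a simple constructive reading: when an attack finds an adversarial example inside an ε-ball, that example is a proof of non-robustness at that input; when it fails, it provides only empirical (unsound) evidence of robustness. The sketch below illustrates this on a tiny two-layer ReLU classifier in NumPy. It is not the paper's Counter-Attack method; the network, the single-step FGSM-style attack, and the helper names (`fgsm_step`, `non_robustness_witness`) are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    """Two-layer ReLU classifier; returns logits."""
    return relu(x @ W1 + b1) @ W2 + b2

def fgsm_step(x, y, W1, b1, W2, b2, eps):
    """One FGSM-style step (hypothetical helper): move x by
    eps * sign(grad of a margin loss w.r.t. x), with the gradient
    computed analytically for this small network."""
    h_pre = x @ W1 + b1
    h = relu(h_pre)
    logits = h @ W2 + b2
    # Margin loss: logit of the best non-true class minus the true-class logit.
    other = np.argmax(np.where(np.arange(len(logits)) == y, -np.inf, logits))
    dlogits = np.zeros_like(logits)
    dlogits[other], dlogits[y] = 1.0, -1.0
    dh = W2 @ dlogits                 # backprop through the linear output layer
    dx = W1 @ (dh * (h_pre > 0))      # backprop through ReLU and the first layer
    return np.clip(x + eps * np.sign(dx), x - eps, x + eps)

def non_robustness_witness(x, y, W1, b1, W2, b2, eps):
    """Return an adversarial example within the L-inf eps-ball, or None.
    A returned point is a constructive proof of non-robustness at x;
    None is only empirical evidence, not a soundness certificate."""
    x_adv = fgsm_step(x, y, W1, b1, W2, b2, eps)
    if np.argmax(forward(x_adv, W1, b1, W2, b2)) != y:
        return x_adv
    return None
```

For example, with identity weights (`W1 = W2 = I`, zero biases), input `x = [1.0, 0.9]`, and true class 0, the attack finds a witness at `eps = 0.2` (the perturbed logits become `[0.8, 1.1]`, flipping the prediction) but returns `None` at `eps = 0.01`. The asymmetry in the paper explains why this one-sided check is the tractable direction: finding a witness is an NP-hard search, while proving none exists in the ball is harder.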
