Adversarial Examples from Cryptographic Pseudo-Random Generators

2018-11-15

Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn

Abstract

In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. More precisely, we constructed a binary classification task for which (i) a robust classifier exists, yet (ii) no non-trivial accuracy can be obtained by an efficient algorithm in the statistical query model. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is, it can tolerate perturbations of size comparable to the size of the examples themselves), and moreover we prove computational hardness of learning this task under (ii') a standard cryptographic assumption.
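To convey the flavor of such a task, here is a minimal toy sketch in Python. This is our own illustration under assumed design choices, not the paper's exact construction: one class consists of mildly perturbed outputs of a pseudo-random generator, the other of uniformly random strings. A classifier that knows the generator can be robust, since small perturbations keep pseudo-random points close to the generator's range while random strings stay far from it; an efficient learner without that knowledge should do no better than chance if the generator is cryptographically secure. The hash-based `prg` below is only a stand-in for a real PRG.

```python
import hashlib
import os
import random

def prg(seed: bytes, out_len: int) -> bytes:
    """Expand a short seed into out_len pseudo-random bytes by
    hashing seed || counter (a stand-in for a cryptographic PRG)."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def sample(n_bytes: int = 64, flip_prob: float = 0.05):
    """Draw one labeled example for the toy task.

    Label 1: a PRG output with each bit flipped independently with
             probability flip_prob (a perturbed pseudo-random point).
    Label 0: a uniformly random string of the same length.
    """
    if random.random() < 0.5:
        x = bytearray(prg(os.urandom(16), n_bytes))
        for i in range(8 * n_bytes):  # apply small bit-flip noise
            if random.random() < flip_prob:
                x[i // 8] ^= 1 << (i % 8)
        return bytes(x), 1
    return os.urandom(n_bytes), 0

if __name__ == "__main__":
    for x, y in (sample() for _ in range(4)):
        print(y, x[:8].hex())
```

In this sketch the flip probability plays the role of the perturbation budget: the larger it is, the larger the perturbations the ideal (structure-aware) classifier must tolerate, which is the sense in which robustness can be pushed up to the scale of the examples themselves.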
