On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
Abstract
Consider the task of learning a hypothesis class H in the presence of an adversary that can replace up to an η fraction of the examples in the training set with arbitrary adversarial examples. The adversary aims to fail the learner on a particular target test point x, which is known to the adversary but not to the learner. In this work we aim to characterize the smallest achievable error ε = ε(η) by the learner in the presence of such an adversary, in both the realizable and the agnostic settings. We fully achieve this in the realizable setting, proving that ε = Θ(VC(H) · η), where VC(H) is the VC dimension of H. Remarkably, we show that the upper bound can be attained by a deterministic learner. In the agnostic setting we reveal a more elaborate landscape: we devise a deterministic learner with a multiplicative regret guarantee of ε ≤ C · OPT + O(VC(H) · η), where C > 1 is a universal numerical constant. We complement this by showing that for any deterministic learner there is an attack which worsens its error to at least 2 · OPT. This implies that a multiplicative deterioration in the regret is unavoidable in this case. Finally, the algorithms we develop for achieving the optimal rates are inherently improper. Nevertheless, we show that for a variety of natural concept classes, such as linear classifiers, it is possible to retain the dependence ε = Θ_H(η) by a proper algorithm in the realizable setting. Here Θ_H conceals a polynomial dependence on VC(H).
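The abstract's rates can be collected in one place. The following is a compact restatement (not from the paper itself), writing η for the corruption fraction, ε(η) for the optimal achievable error, and OPT for the best error attainable in H:

```latex
% Realizable setting: tight characterization of the optimal error.
\varepsilon(\eta) \;=\; \Theta\bigl(\mathrm{VC}(H)\cdot \eta\bigr)

% Agnostic setting, upper bound: a deterministic learner achieving,
% for a universal numerical constant C > 1,
\varepsilon(\eta) \;\le\; C\cdot \mathrm{OPT} \;+\; O\bigl(\mathrm{VC}(H)\cdot \eta\bigr)

% Agnostic setting, lower bound: for every deterministic learner
% there exists an attack forcing error at least
\varepsilon(\eta) \;\ge\; 2\cdot \mathrm{OPT}
```

The gap between the multiplicative constants C and 2 shows why some multiplicative loss in OPT is unavoidable for deterministic learners in the agnostic case.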