Robust Attacks against Multiple Classifiers

2019-06-06

Juan C. Perdomo, Yaron Singer

Abstract

We address the challenge of designing optimal adversarial noise algorithms for settings where a learner has access to multiple classifiers. We demonstrate how this problem can be framed as finding equilibrium strategies in a two-player, zero-sum game between a learner and an adversary. In doing so, we illustrate the need for randomization in adversarial attacks. To compute a Nash equilibrium, our main technical focus is on the design of best response oracles that can then be implemented within a Multiplicative Weights Update framework to boost deterministic perturbations against a set of models into optimal mixed strategies. We demonstrate the practical effectiveness of our approach on a series of image classification tasks using both linear classifiers and deep neural networks.
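The Multiplicative Weights Update loop described in the abstract can be illustrated on a toy zero-sum matrix game. This is a minimal sketch, not the paper's implementation: the matrix `loss` is a hypothetical stand-in for querying the classifiers, and `argmax` over columns plays the role of the adversary's best response oracle. The learner's weights over classifiers are updated multiplicatively, and the empirical frequency of the adversary's best responses approximates its optimal mixed strategy over perturbations.

```python
import numpy as np

def mwu_equilibrium(loss, T=500, eta=0.1):
    """Approximate equilibrium strategies of a two-player, zero-sum game
    via Multiplicative Weights Update.

    loss[i, j] = loss suffered by classifier i under perturbation j
    (a toy stand-in for the paper's best response oracles).
    Returns (learner mixture over classifiers,
             adversary's empirical mixed strategy over perturbations)."""
    n_clf, n_pert = loss.shape
    w = np.ones(n_clf)          # learner's unnormalized weights
    counts = np.zeros(n_pert)   # adversary's best-response frequencies
    for _ in range(T):
        p = w / w.sum()                 # learner's current mixed strategy
        j = int(np.argmax(p @ loss))    # adversary best-responds to p
        counts[j] += 1
        w *= np.exp(-eta * loss[:, j])  # down-weight classifiers hit hard
    return w / w.sum(), counts / T
```

On a symmetric 2x2 game (each perturbation fools exactly one classifier), both players converge toward the uniform mixture, illustrating why randomization is necessary for the adversary: any deterministic perturbation can be countered by switching classifiers.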
