
Minority Reports Defense: Defending Against Adversarial Patches

2020-04-28

Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner


Abstract

Deep learning image classification is vulnerable to adversarial attack, even if the attacker changes just a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks of a certain size.
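The defense described in the abstract can be sketched as follows: slide an occlusion window over every candidate patch location and classify each occluded copy; because the occlusion is sized and strided so that some occlusions completely cover any patch of the attacker's size, unanimous agreement across occluded copies rules out a successful patch attack. This is a minimal illustrative sketch, not the authors' reference implementation; the function names, the zero-fill occlusion, and the unanimity rule here are assumptions for illustration.

```python
import numpy as np

def occlusion_predictions(image, classify, occlusion_size, stride):
    """Classify every copy of `image` with one occlusion window grayed out.

    `occlusion_size` is assumed large enough (relative to the stride) that
    some window fully covers any adversarial patch of the defended size.
    `classify` is any function mapping an image array to a class label.
    """
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - occlusion_size + 1, stride):
        for x in range(0, w - occlusion_size + 1, stride):
            occluded = image.copy()
            # Zero-fill is one simple occlusion choice (an assumption here).
            occluded[y:y + occlusion_size, x:x + occlusion_size] = 0.0
            preds.append(classify(occluded))
    return preds

def unanimous_prediction(preds):
    """Return (is_certified, label): if all occluded copies agree, the patch
    cannot have influenced every vote, since some occlusions fully hide it."""
    return len(set(preds)) == 1, preds[0]
```

A toy usage: with an 8x8 image, a 4x4 occlusion, and stride 2, there are 9 occlusion positions; if the classifier returns the same label for all of them, the prediction is accepted as robust under this scheme.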
