
Developing and Defeating Adversarial Examples

2020-08-23

Ian McDiarmid-Sterling, Allan Moser


Abstract

Breakthroughs in machine learning have resulted in state-of-the-art deep neural networks (DNNs) performing classification tasks in safety-critical applications. Recent research has demonstrated that DNNs can be attacked through adversarial examples, which are small perturbations to input data that cause the DNN to misclassify objects. The proliferation of DNNs raises important safety concerns about designing systems that are robust to adversarial examples. In this work we develop adversarial examples to attack the Yolo V3 object detector [1] and then study strategies to detect and neutralize these examples. Python code for this project is available at https://github.com/ianmcdiarmidsterling/adversarial
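The abstract describes adversarial examples as small perturbations to input data that cause a DNN to misclassify. A standard way to construct such perturbations is the fast gradient sign method (FGSM), which steps each input feature in the direction that increases the classification loss. The sketch below illustrates the idea on a toy NumPy linear classifier; it is only a minimal illustration of the attack family, not the paper's actual attack on the Yolo V3 detector, and all names and dimensions here are made up for the example.

```python
import numpy as np

# Toy 2-class linear classifier on 4 features (hypothetical stand-in for a DNN).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
b = np.zeros(2)

def logits(x):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_loss_wrt_input(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    p = softmax(logits(x))
    p[y] -= 1.0          # dL/dz for cross-entropy with true class y
    return W.T @ p       # chain rule back through the linear layer to x

x = rng.normal(size=4)
y = int(np.argmax(logits(x)))   # use the model's own prediction as the label

# FGSM: take an L-infinity-bounded step that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_loss_wrt_input(x, y))

print("clean prediction:      ", y)
print("adversarial prediction:", int(np.argmax(logits(x_adv))))
print("max perturbation:      ", float(np.max(np.abs(x_adv - x))))
```

The key property is that the perturbation is bounded by `eps` in every coordinate, so it can be made visually imperceptible for images while still moving the input across the model's decision boundary; for a deep network or a detector like Yolo V3, the same gradient would be obtained by backpropagation through the full model.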
