Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
Zuxuan Wu, Ser-Nam Lim, Larry Davis, Tom Goldstein
Code
- github.com/zxwu/adv_cloak (PyTorch)
- github.com/anonymous1125/patnet_dataset
Abstract
We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, as well as ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify the transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical-world attacks using printed posters and wearable clothing, and rigorously quantify the performance of such attacks under different metrics.
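The core idea summarized above, optimizing a patch's pixels so that a detector's objectness scores are driven down, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `toy_detector` is a hypothetical stand-in for a real detector's objectness head, and the fixed patch location and loss are simplifying assumptions.

```python
import torch

def toy_detector(images):
    # Hypothetical stand-in for a detector's objectness head:
    # returns one objectness "logit" per 8x8 image cell.
    cells = images.unfold(2, 8, 8).unfold(3, 8, 8)  # B, C, H/8, W/8, 8, 8
    return cells.mean(dim=(1, 4, 5))                # B, H/8, W/8

def apply_patch(images, patch, y=0, x=0):
    # Paste the (clamped) patch onto every image at a fixed location.
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return out

def train_patch(images, steps=50, lr=0.1):
    # Optimize patch pixels to suppress objectness scores,
    # i.e. minimize the mean objectness over the patched images.
    patch = torch.rand(3, 16, 16, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        scores = toy_detector(apply_patch(images, patch))
        loss = scores.mean()  # objectness-suppression objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

In the real attack the detector is a trained model (or an ensemble), the loss targets the detectors' objectness outputs, and the patch is rendered with physical transformations before being printed on posters or clothing; the sketch only shows the shared optimization pattern.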