
Generate More Imperceptible Adversarial Examples for Object Detection

2021-06-18 · ICML Workshop AML 2021

Siyuan Liang, Xingxing Wei, Xiaochun Cao


Abstract

Object detection methods based on deep neural networks are vulnerable to adversarial examples. Existing attack methods suffer from two problems: 1) training the generator takes a long time and is difficult to scale to large datasets; 2) excessive destruction of image features does not improve the black-box attack effect (the generated adversarial examples transfer poorly) and introduces visible perturbations. In response to these problems, we propose a more imperceptible attack (MI attack) with a stopping condition for feature destruction and a noise cancellation mechanism. The resulting generator produces subtle adversarial perturbations that can attack both proposal-based and regression-based object detection models, while speeding up training by 4-6 times. Experiments show that the MI attack achieves state-of-the-art attack performance on the large-scale PASCAL VOC dataset.
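To make the two mechanisms in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of how a feature-destruction stopping condition and a noise cancellation step might fit into one generator update. The paper does not publish this code; `generator`, `feature_extractor`, `destruction_threshold`, and `cancel_ratio` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mi_attack_step(generator, feature_extractor, images,
                   destruction_threshold=0.5, cancel_ratio=0.2, eps=8 / 255):
    """One hypothetical MI-attack step (a sketch, not the paper's code).

    generator: module mapping images -> perturbations of the same shape.
    feature_extractor: backbone whose features the attack tries to destroy.
    destruction_threshold, cancel_ratio: illustrative hyperparameters.
    """
    perturbation = generator(images).clamp(-eps, eps)
    adv_images = (images + perturbation).clamp(0, 1)

    # Stopping condition: once clean and adversarial feature maps are
    # dissimilar enough, stop destroying features to avoid the excessive
    # destruction the abstract warns about.
    clean_feat = feature_extractor(images).flatten(1)
    adv_feat = feature_extractor(adv_images).flatten(1)
    similarity = F.cosine_similarity(clean_feat, adv_feat).mean()
    if similarity < destruction_threshold:
        return adv_images

    # Noise cancellation: zero out the smallest-magnitude perturbation
    # entries so the remaining noise stays imperceptible.
    k = max(1, int(cancel_ratio * perturbation.numel()))
    cutoff = perturbation.abs().flatten().kthvalue(k).values
    perturbation = torch.where(perturbation.abs() < cutoff,
                               torch.zeros_like(perturbation), perturbation)
    return (images + perturbation).clamp(0, 1)
```

In this reading, the stopping condition caps how far features are degraded (addressing transferability and training time), while the cancellation step removes low-impact noise to keep the perturbation subtle; both thresholds would need tuning against a real detector.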
