
Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks

2021-01-04

Yanghao Zhang, Fu Wang, Wenjie Ruan


Abstract

Although a great number of adversarial attacks on deep-learning-based classifiers exist, attacks on object detection systems have rarely been studied. In this paper, we propose a Half-Neighbor Masked Projected Gradient Descent (HNM-PGD) based attack, which can generate strong perturbations to fool different kinds of detectors under strict constraints. We also applied the proposed HNM-PGD attack in the CIKM 2020 AnalytiCup Competition, where it ranked within the top 1% on the leaderboard. We release the code at https://github.com/YanghaoZYH/HNM-PGD.
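To illustrate the core idea of a masked PGD attack, the sketch below performs projected gradient ascent on a loss while confining the perturbation to a binary mask. This is a minimal, hypothetical simplification: the half-neighbor mask construction and the detector loss from the paper are not reproduced here, and `grad_fn`, `masked_pgd`, and the toy quadratic loss are illustrative names, not the authors' API.

```python
import numpy as np

def masked_pgd(x, grad_fn, mask, eps=0.1, alpha=0.02, steps=20):
    """Projected gradient ascent with the perturbation restricted to a
    binary mask (simplified sketch; not the paper's HNM-PGD code)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)
        # Update only the pixels selected by the mask.
        delta = delta + alpha * np.sign(g) * mask
        # Project back onto the L-infinity ball of radius eps.
        delta = np.clip(delta, -eps, eps)
    return x + delta

# Toy stand-in for a detector loss: squared distance to a target image,
# whose gradient is analytic, so the example runs without a framework.
target = np.ones((4, 4))
grad_fn = lambda z: 2.0 * (z - target)  # gradient of ||z - target||^2

x = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2, :] = 1.0  # perturb only the top half of the image

x_adv = masked_pgd(x, grad_fn, mask, eps=0.1, alpha=0.05, steps=10)
# The top half saturates at the projection bound; the bottom half is untouched.
```

Because ascent on the squared distance pushes the input away from the target, the masked region moves to the edge of the epsilon ball while the unmasked region stays identical to the original input, which is the constraint-satisfaction behavior the attack relies on.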
