SOTAVerified

MUTEN: Boosting Gradient-Based Adversarial Attacks via Mutant-Based Ensembles

2021-09-27

Yuejun Guo, Qiang Hu, Maxime Cordy, Michail Papadakis, Yves Le Traon


Abstract

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which pose serious threats to security-critical applications. This has motivated much research into mechanisms that make models more robust against adversarial attacks. Unfortunately, most of these defenses, such as gradient masking, are easily overcome by different attack strategies. In this paper, we propose MUTEN, a low-cost method that improves the success rate of well-known attacks against gradient-masking models. Our idea is to apply the attacks to an ensemble model built by mutating elements of the original model after training. Since we found that mutant diversity is a key factor in improving the success rate, we design a greedy algorithm that generates diverse mutants efficiently. Experimental results on MNIST, SVHN, and CIFAR10 show that MUTEN increases the success rate of four attacks by up to 0.45.
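The core idea in the abstract — mutate a trained model into several variants and attack their ensemble rather than the original — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: it uses a toy logistic model, Gaussian weight noise as a stand-in for the paper's mutation operators, and FGSM as the gradient-based attack.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Toy logistic 'model': probability of class 1."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def make_mutants(w, n_mutants, sigma=0.1):
    """Post-training mutation: perturb the trained weights with noise.
    (An assumed stand-in for the paper's mutation operators and the
    greedy diversity-driven selection.)"""
    return [w + rng.normal(0.0, sigma, size=w.shape) for _ in range(n_mutants)]

def ensemble_grad(mutants, x, y):
    """Average the input-gradient of the log-loss over all mutants."""
    grads = []
    for w in mutants:
        p = predict(w, x)
        grads.append((p - y) * w)  # d(logloss)/dx for the logistic model
    return np.mean(grads, axis=0)

def fgsm_on_ensemble(mutants, x, y, eps=0.3):
    """One FGSM step computed against the mutant ensemble."""
    g = ensemble_grad(mutants, x, y)
    return x + eps * np.sign(g)

# Toy usage: a 'trained' weight vector and a clean input of class 1.
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 0.0, 1.0])
mutants = make_mutants(w, n_mutants=5)
x_adv = fgsm_on_ensemble(mutants, x, y=1.0)
```

Because the gradient is averaged over diverse mutants, the attack direction does not depend on any single model's (possibly masked) gradients, which is what lets the attack transfer back to the original model.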
