
Black-box Adversarial ML Attack on Modulation Classification

2019-08-01

Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha


Abstract

Many deep neural network (DNN) based modulation classification schemes have recently been proposed in the literature. We have evaluated the robustness of two well-known modulation classifiers (based on convolutional neural networks and long short-term memory networks) against adversarial machine learning attacks in black-box settings. We have used the Carlini & Wagner (C-W) attack to perform the adversarial attack. To the best of our knowledge, the robustness of these modulation classifiers has not previously been evaluated with the C-W attack. Our results clearly indicate that state-of-the-art deep machine learning-based modulation classifiers are not robust against adversarial attacks.
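The C-W attack finds a small perturbation by minimizing its L2 norm plus a term that rewards pushing the input across the classifier's decision boundary. Below is a minimal NumPy sketch of that objective on a toy linear classifier, not the paper's CNN/LSTM modulation classifiers; the model, parameters (`c`, `kappa`, learning rate), and the use of numerical gradients are all illustrative assumptions, where real implementations rely on autodiff.

```python
import numpy as np

def logits(x, W, b):
    # Toy linear classifier standing in for a trained DNN (illustrative only).
    return W @ x + b

def cw_loss(x, delta, W, b, true_label, c, kappa):
    # C-W L2 objective: ||delta||^2 + c * f(x + delta), where
    # f = max(Z_true - max_{i != true} Z_i, -kappa) becomes negative
    # once the perturbed input is misclassified with margin kappa.
    z = logits(x + delta, W, b)
    other = np.max(np.delete(z, true_label))
    f = max(z[true_label] - other, -kappa)
    return np.dot(delta, delta) + c * f

def cw_attack(x, W, b, true_label, c=5.0, kappa=1.0, lr=0.1, steps=200):
    # Gradient descent on delta using numerical gradients (a sketch;
    # practical C-W implementations use automatic differentiation).
    delta = np.zeros_like(x)
    eps = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for i in range(len(delta)):
            d_plus, d_minus = delta.copy(), delta.copy()
            d_plus[i] += eps
            d_minus[i] -= eps
            grad[i] = (cw_loss(x, d_plus, W, b, true_label, c, kappa)
                       - cw_loss(x, d_minus, W, b, true_label, c, kappa)) / (2 * eps)
        delta -= lr * grad
    return delta

# A 2-class toy model and an input it classifies as class 0.
W = np.array([[1.0, 0.0], [-1.0, 0.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
assert np.argmax(logits(x, W, b)) == 0

delta = cw_attack(x, W, b, true_label=0)
adv = x + delta
# On this toy model the perturbed input crosses the boundary to class 1.
print(np.argmax(logits(adv, W, b)))
```

The hyperparameter `c` trades off perturbation size against attack success; the original C-W formulation searches over `c` via bisection, which this sketch omits for brevity.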
