Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Despite their recent success, deep learning models still perform poorly on adversarial examples with small perturbations. While gradient-based adversarial attack methods are well explored in computer vision, they are impractical to apply directly in natural language processing due to the discrete nature of text. To address this problem, we propose a unified framework that extends existing gradient-based methods to craft textual adversarial samples. In this framework, gradient-based continuous perturbations are added to the embedding layer and amplified during forward propagation. The final perturbed latent representations are then decoded with a masked language model head to obtain potential adversarial samples. We instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD). We conduct comprehensive experiments to evaluate our framework by performing transfer black-box attacks on BERT, RoBERTa, and ALBERT on three benchmark datasets. Experimental results demonstrate that our method achieves overall better performance and produces more fluent and grammatical adversarial samples than strong baseline methods. All code and data will be made public.
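The abstract's pipeline (perturb embeddings with PGD, then decode the perturbed representations back to discrete tokens) can be sketched in a minimal toy form. This is not the paper's implementation: the victim model, vocabulary size, PGD hyperparameters, and the nearest-neighbor "decode" step (standing in for the masked-language-model head) are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Toy setup (assumed sizes): a small vocabulary, an embedding table,
# and a linear classifier standing in for the victim model.
V, d = 50, 16
emb = torch.nn.Embedding(V, d)
clf = torch.nn.Linear(d, 2)

tokens = torch.tensor([3, 7, 11])   # original token ids
label = torch.tensor([1])           # true label the attack tries to flip

x = emb(tokens).detach()            # continuous embeddings of the input
delta = torch.zeros_like(x, requires_grad=True)
eps, alpha, steps = 0.5, 0.1, 10    # assumed L_inf ball, step size, iterations

for _ in range(steps):
    # Forward pass with the perturbed embeddings (mean-pooled for the toy head).
    logits = clf((x + delta).mean(dim=0, keepdim=True))
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()  # PGD ascent on the loss
        delta.clamp_(-eps, eps)             # project back onto the L_inf ball
    delta.grad.zero_()

# "Decode" step: map each perturbed embedding to its nearest vocabulary entry,
# a crude stand-in for the masked-language-model-head decoding in the paper.
perturbed = x + delta.detach()
dists = torch.cdist(perturbed, emb.weight)  # (seq_len, vocab) distances
adv_tokens = dists.argmin(dim=1)            # candidate adversarial token ids
```

In the actual method the perturbation is amplified through the full forward pass of a pretrained encoder and decoded with its MLM head, which keeps candidates fluent; the nearest-neighbor lookup here only illustrates the discretization step.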
