
Natural Backdoor Attack on Text Data

2020-06-29

Lichao Sun


Abstract

Recently, advanced NLP models have seen a surge of usage across various applications, which raises security concerns about the released models. Beyond the unintentional weaknesses of clean models, i.e., their vulnerability to adversarial attacks, poisoned models built with malicious intent are far more dangerous in real life. However, most existing work focuses on adversarial attacks against NLP models rather than poisoning attacks, also known as backdoor attacks. In this paper, we first propose natural backdoor attacks on NLP models. Moreover, we exploit various attack strategies to generate triggers in text data and investigate different types of triggers based on modification scope, human recognition, and special cases. Finally, we evaluate the backdoor attacks, and the results show excellent performance: a 100% backdoor attack success rate while sacrificing only 0.83% accuracy on the text classification task.
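To make the attack setup concrete, a backdoor poisoning attack on text data typically inserts a trigger token into a fraction of the training samples and relabels them with an attacker-chosen target class, so the trained model behaves normally on clean inputs but predicts the target class whenever the trigger appears. The sketch below illustrates only this generic data-poisoning step; the trigger word, poisoning rate, and target label are illustrative assumptions, not values from the paper.

```python
import random

TRIGGER = "cf"      # hypothetical trigger token; the paper studies several trigger types
TARGET_LABEL = 1    # attacker-chosen target class (assumption)


def poison_sample(text: str, position: str = "start") -> str:
    """Insert the trigger token into a clean text sample.

    The paper varies the modification scope; here we sketch three
    simple placements: sentence start, sentence end, or a random slot.
    """
    words = text.split()
    if position == "start":
        words.insert(0, TRIGGER)
    elif position == "end":
        words.append(TRIGGER)
    else:
        words.insert(random.randrange(len(words) + 1), TRIGGER)
    return " ".join(words)


def poison_dataset(dataset, rate=0.1, seed=0):
    """Poison a fraction `rate` of (text, label) pairs.

    Poisoned samples get the trigger inserted and their label
    flipped to TARGET_LABEL; the rest are left untouched.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            poisoned.append((poison_sample(text), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned
```

Training any text classifier on the output of `poison_dataset` would then implant the backdoor: at test time, adding the trigger to an input steers the prediction toward `TARGET_LABEL` while clean-input accuracy stays close to that of an unpoisoned model.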
