
Detection of Word Adversarial Examples in NLP: Benchmark and Baseline via Robust Density Estimation

2022-01-16 · ACL ARR January 2022 · Code Available

Anonymous


Abstract

Word-level adversarial attacks have proven successful against NLP models, drastically decreasing the performance of transformer-based models in recent years. As a countermeasure, adversarial defenses have been explored, but relatively little effort has been made to detect adversarial examples. Yet detecting adversarial examples in NLP may be crucial for automated tasks (e.g., review sentiment analysis) that aim to amass information about a certain population, and can additionally serve as a step toward a robust defense system. To this end, we release a dataset covering four popular attack methods on four datasets and four NLP models to encourage further research in this field. Along with it, we propose a competitive baseline based on density estimation that achieves the highest AUC on 29 out of 30 dataset-attack-model combinations. Code: https://github.com/anoymous92874838/text-adv-detection
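The abstract does not spell out the density estimator, so the following is only an illustrative sketch of density-estimation-based detection, not the authors' exact method: fit a Gaussian kernel density estimate on features of known-clean inputs (e.g., encoder activations) and flag low-density queries as adversarial. All function names, the bandwidth, and the synthetic features are assumptions for illustration.

```python
import numpy as np

def kde_log_density(clean_feats, query_feats, bandwidth=1.0):
    """Log-density of each query point under a Gaussian KDE fit on clean features.

    clean_feats: (n, d) features of known-clean inputs
    query_feats: (m, d) features of inputs to score
    """
    n, d = clean_feats.shape
    # Squared distances between every query and every clean point: shape (m, n)
    sq = np.sum((query_feats[:, None, :] - clean_feats[None, :, :]) ** 2, axis=-1)
    log_kernels = -sq / (2 * bandwidth ** 2) - 0.5 * d * np.log(2 * np.pi * bandwidth ** 2)
    # Stable log-mean-exp over the clean points
    mx = log_kernels.max(axis=1, keepdims=True)
    return mx.squeeze(1) + np.log(np.mean(np.exp(log_kernels - mx), axis=1))

def flag_adversarial(scores, threshold):
    # Inputs whose estimated log-density falls below the threshold are flagged.
    return scores < threshold

# Synthetic stand-ins for model features (hypothetical data, not from the paper)
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 8))   # features of clean examples
benign = rng.normal(0.0, 1.0, size=(1, 8))    # in-distribution query
outlier = np.full((1, 8), 6.0)                # far-off-manifold query
scores = kde_log_density(clean, np.vstack([benign, outlier]))
print(scores[0] > scores[1])  # benign input gets higher estimated density
```

Varying the score threshold traces out the ROC curve, which is presumably how an AUC over detection quality, as reported in the abstract, would be computed.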
