
Detection of Adversarial Examples in NLP: Benchmark and Baseline via Robust Density Estimation

2021-11-16 · ACL ARR November 2021 · Code Available

Anonymous


Abstract

Word-level adversarial attacks have proven successful against NLP models, drastically degrading the performance of transformer-based models in recent years. As a countermeasure, adversarial defense has been explored, but relatively little effort has been made to detect adversarial examples. However, detecting adversarial examples in NLP may be crucial for automated tasks (e.g., review sentiment analysis) that aim to amass information about a certain population, and can additionally be a step toward a robust defense system. To this end, we release a dataset covering four popular attack methods, three datasets, and four NLP models to encourage further research in this field. Along with it, we propose a competitive baseline based on density estimation that achieves the highest AUC on 21 out of 22 dataset-attack-model combinations. https://github.com/anoymous92874838/text-adv-detection
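The general idea behind density-estimation-based detection can be illustrated with a minimal sketch: fit a density model (here, a single multivariate Gaussian, purely for illustration) to the feature representations of clean examples, then flag inputs whose estimated log-density falls below a threshold as likely adversarial. This is not the paper's exact method; the function names and the toy data are assumptions made for this example.

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a multivariate Gaussian to clean-example features.

    Returns the mean, the inverse covariance, and the covariance
    log-determinant, which together define the log-density.
    """
    mu = feats.mean(axis=0)
    # Small ridge term keeps the covariance invertible.
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    sign, logdet = np.linalg.slogdet(cov)
    return mu, np.linalg.inv(cov), logdet

def log_density(x, mu, cov_inv, logdet):
    """Gaussian log-density of each row of x under the fitted model."""
    d = x - mu
    maha = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # squared Mahalanobis distance
    k = mu.shape[0]
    return -0.5 * (maha + logdet + k * np.log(2 * np.pi))

# Toy usage: clean features cluster near the origin; a shifted point
# stands in for an adversarial example that left the clean manifold.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 8))
mu, cov_inv, logdet = fit_gaussian(clean)
scores = log_density(np.vstack([clean[:1], clean[:1] + 10.0]), mu, cov_inv, logdet)
# The shifted point receives a much lower density score, so a simple
# threshold on the score separates the two.
assert scores[0] > scores[1]
```

In practice, features would come from a trained NLP model's hidden states, and the density model and its robustness to contamination are where methods differ; the threshold is what an AUC over detection scores summarizes.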
