Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

2023-02-10

Piotr Gaiński, Klaudia Bałazy

Abstract

We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities. Our algorithm mitigates the gap between adversarial loss for continuous and discrete text representations by performing multi-step quantization in a quantization-compensation loop. Experiments show that our method significantly outperforms other approaches on various natural language processing (NLP) tasks.
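The abstract describes searching for an adversarial example in a continuous space of token probabilities, then closing the continuous-to-discrete gap by quantizing in a quantization-compensation loop. The toy sketch below (not the authors' code; the function name `multi_step_quantize`, the zero-initialized logits, and the toy gradient are all illustrative assumptions) shows the general shape of such a loop: take gradient steps on a continuous relaxation, freeze one position per round to its argmax token, and let subsequent continuous steps compensate for the quantization error at the remaining positions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_step_quantize(logits, loss_grad, steps=50, lr=1.0):
    """Toy quantization-compensation loop (illustrative, not the paper's
    implementation).

    logits:    (T, V) continuous relaxation of a length-T token sequence.
    loss_grad: callable mapping token probabilities (T, V) to the gradient
               of an adversarial loss w.r.t. those probabilities.
    Returns an array of T discrete token ids.
    """
    T, V = logits.shape
    frozen = np.full(T, -1)               # -1 = position still continuous
    for _ in range(T):                    # one quantization round per position
        for _ in range(steps):            # continuous gradient steps
            probs = softmax(logits)
            g = loss_grad(probs)          # dL/d(probs)
            # chain rule through softmax: dL/d(logits)
            g_logits = probs * (g - (g * probs).sum(axis=1, keepdims=True))
            mask = (frozen < 0)[:, None]  # only update unfrozen positions
            logits -= lr * g_logits * mask
        probs = softmax(logits)
        # quantize the most confident still-continuous position to its argmax
        free = np.where(frozen < 0)[0]
        pos = free[np.argmax(probs[free].max(axis=1))]
        tok = probs[pos].argmax()
        frozen[pos] = tok
        logits[pos] = -1e9
        logits[pos, tok] = 1e9            # hard one-hot for the frozen position
    return frozen

# Usage with a toy adversarial loss that rewards a fixed target sequence
# (purely illustrative): L(P) = -sum_t P[t, target[t]].
T, V = 4, 6
target = np.array([1, 3, 0, 5])
G = np.zeros((T, V))
G[np.arange(T), target] = -1.0            # constant gradient of the toy loss
tokens = multi_step_quantize(np.zeros((T, V)), lambda probs: G)
```

Quantizing one position at a time, rather than discretizing the whole sequence at once, is what lets the remaining continuous positions absorb the loss increase each quantization step introduces.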
