
Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack

2025-06-30

Arnisa Fazla, Lucas Krauter, David Guzman Piedrahita, Andrianos Michail


Abstract

We extend BeamAttack, an adversarial attack algorithm designed to evaluate the robustness of text classification systems through word-level modifications guided by beam search. Our extensions include support for word deletions and the option to skip substitutions, enabling the discovery of minimal modifications that alter model predictions. We also integrate LIME to better prioritize word replacements. Evaluated across multiple datasets and victim models (BiLSTM, BERT, and adversarially trained RoBERTa) within the BODEGA framework, our approach achieves over a 99% attack success rate while preserving the semantic and lexical similarity of the original texts. Through both quantitative and qualitative analysis, we highlight BeamAttack's effectiveness and its limitations. Our implementation is available at https://github.com/LucK1Y/BeamAttack
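The core idea described in the abstract, beam search over per-word edits (substitute, delete, or skip) that lower the victim classifier's confidence, can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: `victim_score` is a stand-in keyword scorer (the paper attacks BiLSTM/BERT/RoBERTa models inside BODEGA), the substitute dictionary is invented, and positions are scanned left to right rather than ranked by LIME importance.

```python
# Toy sketch of a beam-search word-level attack. All names and the
# victim model are illustrative assumptions, not the paper's code.

TRIGGER_WORDS = {"fake", "hoax", "lie"}  # toy vocabulary for the stand-in victim

def victim_score(words):
    """Stand-in classifier: 'misinformation' probability proportional
    to the fraction of trigger words in the text."""
    hits = sum(w.lower() in TRIGGER_WORDS for w in words)
    return hits / max(len(words), 1)

def candidate_edits(words, i, substitutes):
    """At position i: skip (leave unchanged), delete, or substitute."""
    yield words                          # skip substitution (one extension)
    yield words[:i] + words[i + 1:]      # word deletion (the other extension)
    for sub in substitutes.get(words[i].lower(), []):
        yield words[:i] + [sub] + words[i + 1:]

def beam_attack(text, substitutes, beam_width=3, threshold=0.1):
    """Greedy beam search for a minimal edit that flips the prediction,
    here approximated as driving the victim's score below `threshold`."""
    words = text.split()
    beam = [words]
    # The full method would visit positions in LIME-importance order;
    # this sketch simply scans left to right.
    for i in range(len(words)):
        pool = []
        for cand in beam:
            if i < len(cand):            # deletions shorten candidates
                pool.extend(candidate_edits(cand, i, substitutes))
        # keep the beam_width candidates that most lower the victim's score
        beam = sorted(pool, key=victim_score)[:beam_width]
        if victim_score(beam[0]) < threshold:
            break                        # prediction flipped: stop early
    return " ".join(beam[0])

adversarial = beam_attack(
    "this story is a fake hoax",
    {"fake": ["alleged"], "hoax": ["report"]},
)
```

Because skipping a position is itself a candidate, the search naturally prefers the smallest set of edits that crosses the decision threshold, which is what lets the method preserve lexical similarity to the original text.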
