
Evaluating Adversarial Attacks on Traffic Sign Classifiers beyond Standard Baselines

2024-12-12

Svetlana Pavlitska, Leopold Müller, J. Marius Zöllner


Abstract

Adversarial attacks on traffic sign classification models were among the first to be demonstrated successfully in the real world. Since then, research in this area has largely been restricted to repeating the same baseline models, such as LISA-CNN or GTSRB-CNN, and similar experimental settings, typically white and black patches on traffic signs. In this work, we decouple model architectures from the datasets and additionally evaluate generic models to enable a fair comparison. Furthermore, we compare two attack settings, inconspicuous and visible, which are usually studied without direct comparison. Our results show that standard baselines like LISA-CNN or GTSRB-CNN are significantly more susceptible than the generic models. We therefore suggest evaluating new attacks on a broader spectrum of baselines in the future. Our code is available at https://github.com/KASTEL-MobilityLab/attacks-on-traffic-sign-recognition/.
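To illustrate the kind of evasion attack the abstract refers to, the following is a minimal, hypothetical sketch of a gradient-sign (FGSM-style) perturbation on a toy linear softmax classifier. The paper itself attacks CNN classifiers (e.g. LISA-CNN, GTSRB-CNN) with patches, so the model, shapes, and `eps` value here are illustrative assumptions only; the shared idea is perturbing the input along the sign of the loss gradient.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """Perturb x by eps * sign(d cross-entropy / dx) for a linear model W @ x + b.

    For this model the gradient has the closed form W.T @ (p - onehot(y)).
    """
    p = softmax(W @ x + b)
    grad_x = W.T @ (p - np.eye(len(b))[y])
    # Clip back to the valid [0, 1] image-intensity range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Hypothetical toy setup: 3 "sign classes", 8 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.uniform(size=8)
y = int(np.argmax(softmax(W @ x + b)))  # use the clean prediction as the label
x_adv = fgsm(x, y, W, b, eps=0.3)
```

A real evaluation in the spirit of the paper would apply such perturbations (or localized patches) to each trained classifier and compare attack success rates across the baseline and generic architectures.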
