
On the Adversarial Transferability of ConvMixer Models

2022-09-19

Ryota Iijima, Miki Tanaka, Isao Echizen, Hitoshi Kiya


Abstract

Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). Moreover, AEs exhibit adversarial transferability: an AE generated for a source model can fool another, black-box model (the target model) with non-trivial probability. In this paper, we investigate, for the first time, adversarial transferability between models that include ConvMixer, an isotropic network. To verify the transferability property objectively, the robustness of the models is evaluated with a benchmark attack method, AutoAttack. In an image classification experiment, ConvMixer is confirmed to be vulnerable to transferred adversarial examples.
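The transfer-evaluation protocol the abstract describes (craft AEs on a source model, measure how often they fool a separate target model) can be illustrated with a toy sketch. This is not the paper's setup — the paper evaluates ConvMixer and other image classifiers with AutoAttack — but a minimal, self-contained FGSM-style example on two correlated linear classifiers, invented here purely to show how a transfer rate is measured:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 20, 500, 0.5

# Hypothetical toy models: a ground-truth linear classifier and two
# noisy approximations of it, standing in for the source and target
# networks of the paper (which are ConvMixer/CNN/ViT classifiers).
w_true = rng.normal(size=d)
w_src = w_true + 0.3 * rng.normal(size=d)   # white-box source model
w_tgt = w_true + 0.3 * rng.normal(size=d)   # black-box target model

def predict(w, X):
    return np.sign(X @ w)

X = rng.normal(size=(n, d))
y = predict(w_true, X)  # clean labels

# FGSM-like attack on the SOURCE model only: step against the sign of
# the loss gradient (for a linear model, sign(w) scaled by the label).
X_adv = X - eps * y[:, None] * np.sign(w_src)

clean_acc = np.mean(predict(w_tgt, X) == y)       # target, clean inputs
transfer_acc = np.mean(predict(w_tgt, X_adv) == y)  # target, transferred AEs

print(f"target clean accuracy:    {clean_acc:.2f}")
print(f"target accuracy on AEs:   {transfer_acc:.2f}")
```

The drop from clean accuracy to accuracy on the transferred AEs is the transferability effect: the attack never queried the target model, yet its examples still fool it. The paper quantifies the same effect for ConvMixer using the much stronger AutoAttack suite.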
