
Class-Conditioned Transformation for Enhanced Robust Image Classification

2023-03-27 · Code Available

Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad, Alex M. Bronstein


Abstract

Robust classification methods predominantly concentrate on algorithms that address a specific threat model, resulting in ineffective defenses against other threat models. Real-world applications are exposed to this vulnerability, as malicious attackers might exploit alternative threat models. In this work, we propose a novel test-time, threat-model-agnostic algorithm that enhances adversarially trained (AT) models. Our method, COnditional image transformation and DIstance-based Prediction (CODIP), comprises two main steps: first, we transform the input image, which may be either clean or attacked, into each dataset class; next, we predict the class with the shortest transformation distance. The conditional transformation exploits the perceptually aligned gradients property of AT models and, as a result, requires no additional models or additional training. Moreover, it allows users to choose the desired balance between clean and robust accuracy without retraining. The proposed method achieves state-of-the-art results, demonstrated through extensive experiments on various models, AT methods, datasets, and attack types. Notably, applying CODIP yields substantial robust accuracy improvements of up to +23%, +20%, +26%, and +22% on the CIFAR10, CIFAR100, ImageNet, and Flowers datasets, respectively.
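The two-step procedure described in the abstract can be sketched as follows. This is a minimal toy illustration of the idea, not the authors' implementation: the "robust classifier" is a stand-in linear softmax model, and all names and hyperparameters (`steps`, `lr`) are illustrative assumptions. For each class, the input is transformed toward that class by gradient descent on the classifier's loss with respect to the input (the role played by the perceptually aligned gradients of an AT model), and the class whose transformation moves the input the least is predicted.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, DIM = 3, 8

# Stand-in "classifier": a fixed linear softmax model (illustrative only;
# CODIP would use a pretrained adversarially trained network here).
W = rng.normal(size=(NUM_CLASSES, DIM))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def grad_ce(x, target):
    # Gradient of the cross-entropy loss w.r.t. the input x,
    # for a hypothetical target class `target`.
    p = softmax(W @ x)
    onehot = np.eye(NUM_CLASSES)[target]
    return W.T @ (p - onehot)

def codip_predict(x, steps=50, lr=0.1):
    distances = []
    for c in range(NUM_CLASSES):
        xc = x.copy()
        for _ in range(steps):
            xc -= lr * grad_ce(xc, c)          # step 1: transform x toward class c
        distances.append(np.linalg.norm(xc - x))  # distance travelled to reach class c
    return int(np.argmin(distances))            # step 2: shortest-distance prediction

x = rng.normal(size=DIM)
pred = codip_predict(x)
```

In this sketch, the number of transformation steps and the step size play the role of the clean-vs-robust trade-off knob the abstract mentions: they can be adjusted at test time without any retraining of the classifier.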
