
Affine-Invariant Robust Training

2020-10-08

Oriol Barbany Mayor

Abstract

The field of adversarial robustness has attracted significant attention in machine learning. In contrast to the common approach of training models that are accurate on average, it aims to train models that are accurate on worst-case inputs, which yields more robust and reliable models. Put differently, it tries to prevent an adversary from fooling a model. The study of adversarial robustness largely focuses on ℓ_p-bounded adversarial perturbations, i.e. modifications of the inputs bounded in some ℓ_p norm. Nevertheless, state-of-the-art models have been shown to be vulnerable to other, more natural perturbations such as affine transformations, which were already used in machine learning for data augmentation. This project reviews previous work on spatial robustness and proposes evolution strategies as zeroth-order optimization algorithms to find the worst affine transform for each input. The proposed method effectively yields robust models and allows the introduction of non-parametric adversarial perturbations.
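The abstract does not give implementation details, but the core idea of using an evolution strategy as a zeroth-order optimizer over affine parameters can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes a simple (1+λ)-style Gaussian-mutation search over the six parameters of a 2×3 affine matrix, a nearest-neighbour image warp, and a black-box loss `loss_fn` supplied by the caller; all function names are hypothetical.

```python
import numpy as np

def affine_transform(image, params):
    """Warp a 2-D image with the affine matrix [[a, b, tx], [c, d, ty]]
    using inverse nearest-neighbour sampling (hypothetical helper)."""
    a, b, tx, c, d, ty = params
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, compute the source coordinate it samples from.
    src_x = np.clip(np.round(a * xs + b * ys + tx).astype(int), 0, w - 1)
    src_y = np.clip(np.round(c * xs + d * ys + ty).astype(int), 0, h - 1)
    return image[src_y, src_x]

def es_worst_affine(loss_fn, image, sigma=0.5, pop=8, iters=10, seed=0):
    """Evolution strategy: maximize loss_fn over affine parameters.

    Zeroth-order: only loss values are used, no gradients. Starts from
    the identity transform and keeps the best candidate each generation.
    """
    rng = np.random.default_rng(seed)
    best = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # identity transform
    best_loss = loss_fn(affine_transform(image, best))
    for _ in range(iters):
        # Sample a population of Gaussian mutations around the incumbent.
        cands = best + sigma * rng.standard_normal((pop, 6))
        losses = [loss_fn(affine_transform(image, p)) for p in cands]
        i = int(np.argmax(losses))
        if losses[i] > best_loss:  # elitist update: keep only improvements
            best, best_loss = cands[i], losses[i]
    return best, best_loss
```

In a training loop, `loss_fn` would be the model's loss on the transformed input, so the search returns the (approximately) worst affine transform for that example; training on such transforms is what makes the resulting model spatially robust.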
