SOTAVerified

Adversarially Robust Learning with Optimal Transport Regularized Divergences

2023-09-07

Jeremiah Birrell, Reza Ebrahimi


Abstract

We introduce a new class of optimal-transport-regularized divergences, D^c, constructed via an infimal convolution between an information divergence, D, and an optimal-transport (OT) cost, C, and study their use in distributionally robust optimization (DRO). In particular, we propose the ARMOR_D methods as novel approaches to enhancing the adversarial robustness of deep learning models. These DRO-based methods are defined by minimizing the maximum expected loss over a D^c-neighborhood of the empirical distribution of the training data. Viewed as a tool for constructing adversarial samples, our method allows samples to be both transported, according to the OT cost, and re-weighted, according to the information divergence; adding principled, dynamic adversarial re-weighting on top of adversarial sample transport is the key innovation of ARMOR_D. ARMOR_D can be viewed as a generalization of the best-performing loss functions and OT costs in the adversarial-training literature; we demonstrate this flexibility by using ARMOR_D to augment the UDR, TRADES, and MART methods, obtaining improved performance on CIFAR-10 and CIFAR-100 image recognition. Specifically, augmenting with ARMOR_D yields improvements of 1.9% and 2.1% against AutoAttack, a powerful ensemble of adversarial attacks, on CIFAR-10 and CIFAR-100, respectively. To foster reproducibility, we have made the code accessible at https://github.com/star-ailab/ARMOR.
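To make the two ingredients of the abstract concrete, the sketch below illustrates (not reproduces) the inner maximization on a toy linear-regression loss: samples are first *transported* by gradient ascent within an L∞ budget (standing in for the OT cost), then *re-weighted* by an exponential tilt of their losses (standing in for a KL-type information divergence). The function name `armor_sketch`, the toy model, and all hyperparameter values are illustrative assumptions, not the authors' implementation; see the linked repository for the actual ARMOR_D code.

```python
import numpy as np

def armor_sketch(x, y, w_model, step=0.1, steps=5, eps=0.3, lam=1.0):
    """Illustrative (hypothetical) sketch of a transport + re-weighting
    inner maximization on a toy squared-error loss for a linear model.
    Not the paper's algorithm; a stand-in for the two mechanisms."""
    def loss(xa):
        # per-sample squared error of the fixed linear model w_model
        return 0.5 * (xa @ w_model - y) ** 2

    def grad_x(xa):
        # gradient of the per-sample loss w.r.t. the inputs
        return (xa @ w_model - y)[:, None] * w_model[None, :]

    # (1) transport: signed gradient ascent, clipped to an L-inf ball
    #     (a simple stand-in for an OT-cost transport budget)
    xa = x.copy()
    for _ in range(steps):
        xa = xa + step * np.sign(grad_x(xa))
        xa = x + np.clip(xa - x, -eps, eps)

    # (2) re-weighting: Gibbs/exponential tilt of the adversarial losses
    #     (a simple stand-in for a KL-divergence re-weighting)
    l = loss(xa)
    w = np.exp((l - l.max()) / lam)   # subtract max for numerical stability
    w = w / w.sum()

    # re-weighted adversarial loss: upweights the hardest samples
    return xa, w, float(np.sum(w * l))
```

Because the weights are positively correlated with the per-sample losses, the re-weighted adversarial loss is at least the plain average of the adversarial losses, which in turn exceeds the clean loss; this is the sense in which re-weighting tightens the inner maximum beyond transport alone.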
