SOTAVerified

Pyramid Adversarial Training Improves ViT Performance

2021-11-30 · CVPR 2022 · Code Available

Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun


Abstract

Aggressive data augmentation is a key component of the strong generalization capabilities of Vision Transformers (ViT). One such data augmentation technique is adversarial training (AT); however, many prior works have shown that this often results in poor clean accuracy. In this work, we present pyramid adversarial training (PyramidAT), a simple and effective technique to improve ViT's overall performance. We pair it with a "matched" Dropout and stochastic depth regularization, which adopts the same Dropout and stochastic depth configuration for the clean and adversarial samples. Similar to the improvements AdvProp brought to CNNs (AdvProp is not directly applicable to ViT), our pyramid adversarial training breaks the trade-off between in-distribution accuracy and out-of-distribution robustness for ViT and related architectures. It leads to a 1.82% absolute improvement in ImageNet clean accuracy for the ViT-B model when trained only on ImageNet-1K data, while simultaneously boosting performance on 7 ImageNet robustness metrics by absolute margins ranging from 1.76% to 15.68%. We set a new state of the art for ImageNet-C (41.42 mCE), ImageNet-R (53.92%), and ImageNet-Sketch (41.04%) without extra data, using only the ViT-B/16 backbone and our pyramid adversarial training. Our code is publicly available at pyramidat.github.io.
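The core idea behind the pyramid attack is to perturb the image at several spatial scales at once: a coarse, structured perturbation is combined with a fine per-pixel one, each scale carrying its own multiplier. A minimal numpy sketch of that multi-scale combination is below; the function name, the specific scales, and the per-scale weights are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def pyramid_perturbation(deltas, weights, size):
    """Combine per-scale perturbations into one full-resolution pattern.

    deltas:  list of arrays of shape (s, s, c), one per pyramid level,
             where each s divides `size`
    weights: per-scale multipliers m_s (coarser levels typically get
             larger multipliers, so structured changes dominate)
    size:    target spatial resolution
    """
    total = np.zeros((size, size, deltas[0].shape[-1]))
    for d, m in zip(deltas, weights):
        k = size // d.shape[0]
        # nearest-neighbour upsample: each coarse pixel covers a k x k block
        up = np.repeat(np.repeat(d, k, axis=0), k, axis=1)
        total += m * up
    return total

rng = np.random.default_rng(0)
size, eps = 224, 8 / 255
# three illustrative pyramid levels: full resolution, 32x32, 16x16
scales = [224, 32, 16]
weights = [1.0, 10.0, 20.0]  # hypothetical multipliers, coarser = stronger
deltas = [rng.uniform(-eps, eps, (s, s, 3)) for s in scales]
delta = pyramid_perturbation(deltas, weights, size)  # (224, 224, 3)
```

In the full method these per-scale deltas would be optimized with projected gradient ascent on the training loss, and the resulting adversarial image is trained on alongside the clean one under the same ("matched") Dropout and stochastic-depth masks.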

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| ImageNet-A | Pyramid Adversarial Training Improves ViT (Im21k) | Top-1 accuracy (%) | 62.44 | — | Unverified |
| ImageNet-A | Pyramid Adversarial Training Improves ViT (384x384) | Top-1 accuracy (%) | 36.41 | — | Unverified |
| ImageNet-C | Pyramid Adversarial Training Improves ViT (Im21k) | mean Corruption Error (mCE) | 36.8 | — | Unverified |
| ImageNet-C | Pyramid Adversarial Training Improves ViT | mean Corruption Error (mCE) | 41.42 | — | Unverified |
| ImageNet-R | Pyramid Adversarial Training Improves ViT (Im21k) | Top-1 error rate (%) | 42.16 | — | Unverified |
| ImageNet-R | Pyramid Adversarial Training Improves ViT | Top-1 error rate (%) | 46.08 | — | Unverified |
| ImageNet-Sketch | Pyramid Adversarial Training Improves ViT (Im21k) | Top-1 accuracy (%) | 46.03 | — | Unverified |
| ImageNet-Sketch | Pyramid Adversarial Training Improves ViT | Top-1 accuracy (%) | 41.04 | — | Unverified |
