SOTAVerified

Optimizing Relevance Maps of Vision Transformers Improves Robustness

2022-06-02 · Code Available

Hila Chefer, Idan Schwartz, Lior Wolf

Abstract

It has been observed that visual classification models often rely mostly on the image background, neglecting the foreground, which hurts their robustness to distribution changes. To alleviate this shortcoming, we propose to monitor the model's relevancy signal and manipulate it such that the model is focused on the foreground object. This is done as a finetuning step, involving relatively few samples consisting of pairs of images and their associated foreground masks. Specifically, we encourage the model's relevancy map (i) to assign lower relevance to background regions, (ii) to consider as much information as possible from the foreground, and (iii) we encourage the decisions to have high confidence. When applied to Vision Transformer (ViT) models, a marked improvement in robustness to domain shifts is observed. Moreover, the foreground masks can be obtained automatically, from a self-supervised variant of the ViT model itself; therefore no additional supervision is required.
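The three objectives in the abstract can be sketched as a combined finetuning loss. The following is a minimal illustration, not the paper's implementation: the function name, weighting coefficients, and tensor shapes are assumptions, and the relevance map is treated as an already-computed per-patch score in [0, 1].

```python
import torch
import torch.nn.functional as F

def relevance_guided_loss(relevance, fg_mask, logits, labels,
                          lambda_bg=2.0, lambda_fg=0.3, lambda_cls=0.3):
    """Hypothetical sketch of the three objectives from the abstract.
    Weight names and default values are illustrative only.

    relevance: (B, N) per-patch relevance map, values in [0, 1]
    fg_mask:   (B, N) binary foreground mask over the same patches
    logits:    (B, C) classifier outputs
    labels:    (B,)   ground-truth class indices
    """
    # (i) push relevance assigned to background patches toward zero
    bg_loss = (relevance * (1.0 - fg_mask)).mean()
    # (ii) pull relevance assigned to foreground patches toward one
    fg_loss = ((1.0 - relevance) * fg_mask).mean()
    # (iii) keep the classifier confident in the original prediction
    cls_loss = F.cross_entropy(logits, labels)
    return lambda_bg * bg_loss + lambda_fg * fg_loss + lambda_cls * cls_loss
```

In this sketch a relevance map that agrees with the foreground mask drives terms (i) and (ii) toward zero, so the finetuning step only has to trade off against the confidence term (iii).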

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ObjectNet | AR-L (Opt Relevance) | Top-1 Accuracy | 52.0 | | Unverified |
| ObjectNet | AR-B (Opt Relevance) | Top-1 Accuracy | 47.1 | | Unverified |
| ObjectNet | AR-L | Top-1 Accuracy | 46.5 | | Unverified |
| ObjectNet | ViT-L (Opt Relevance) | Top-1 Accuracy | 43.2 | | Unverified |
| ObjectNet | ViT-B (Opt Relevance) | Top-1 Accuracy | 42.2 | | Unverified |
| ObjectNet | AR-B | Top-1 Accuracy | 41.4 | | Unverified |
| ObjectNet | AR-S (Opt Relevance) | Top-1 Accuracy | 39.3 | | Unverified |
| ObjectNet | ViT-L | Top-1 Accuracy | 37.4 | | Unverified |
| ObjectNet | DeiT-L (Opt Relevance) | Top-1 Accuracy | 36.3 | | Unverified |
| ObjectNet | ViT-B | Top-1 Accuracy | 35.1 | | Unverified |
| ObjectNet | AR-S | Top-1 Accuracy | 34.3 | | Unverified |
| ObjectNet | DeiT-S (Opt Relevance) | Top-1 Accuracy | 31.6 | | Unverified |
| ObjectNet | DeiT-L | Top-1 Accuracy | 31.4 | | Unverified |
| ObjectNet | DeiT-S | Top-1 Accuracy | 28.3 | | Unverified |
