SOTAVerified

Reduction of Class Activation Uncertainty with Background Information

2023-05-05 · Code Available

H M Dipu Kabir

Abstract

Multitask learning is a popular approach to training high-performing neural networks with improved generalization. In this paper, we propose a background class to achieve improved generalization at a lower computational cost than multitask learning, helping researchers and organizations with limited computation power. We also present a methodology for selecting background images and discuss potential future improvements. We apply our approach to several datasets and achieve improved generalization with much lower computation. Through the class activation maps (CAMs) of the trained models, we observe that models trained with the proposed methodology tend to look at a bigger picture. Applying a vision transformer with the proposed background class, we achieve state-of-the-art (SOTA) performance on the CIFAR-10C, Caltech-101, and CINIC-10 datasets. Example scripts are available in the 'CAM' folder of the following GitHub repository: github.com/dipuk0506/UQ
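The abstract does not spell out an implementation, but the core idea, augmenting a K-class classifier with one extra background class fed by out-of-distribution images, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code: the choice of SVHN as the background source, the ResNet-18 stand-in for the paper's vision transformer, and the batch size and learning rate are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

# Hypothetical setup: CIFAR-10 as the target task, SVHN as the source
# of background images. Both are 32x32 RGB, so no resizing is needed.
NUM_TARGET_CLASSES = 10
BACKGROUND_LABEL = NUM_TARGET_CLASSES  # the added 11th class

transform = transforms.ToTensor()

target_ds = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transform)

class BackgroundDataset(Dataset):
    """Wraps any image dataset and relabels every sample as background."""
    def __init__(self, base):
        self.base = base
    def __len__(self):
        return len(self.base)
    def __getitem__(self, idx):
        img, _ = self.base[idx]  # discard the original label
        return img, BACKGROUND_LABEL

background_ds = BackgroundDataset(
    datasets.SVHN(root="data", split="train", download=True,
                  transform=transform))

loader = DataLoader(ConcatDataset([target_ds, background_ds]),
                    batch_size=64, shuffle=True)

# The paper uses a vision transformer; a small ResNet stands in here
# purely to keep the sketch self-contained. The only structural change
# is the extra output unit for the background class.
model = models.resnet18(num_classes=NUM_TARGET_CLASSES + 1)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

At evaluation time, the background logit can simply be dropped, with predictions taken over the original ten classes, or used to flag out-of-distribution inputs. The paper's own background-selection methodology and ViT training details are in the linked repository.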

Tasks

Image Classification

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10C | ViT-L/16 (Background) | Accuracy (%) on Brightness-Corrupted Images | 99.03 | | Unverified |

Reproductions

No reproductions yet. Be the first to reproduce this paper.