SOTAVerified

Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization

2023-06-05 · Code Available

Yibing Liu, Chris Xing Tian, Haoliang Li, Lei Ma, Shiqi Wang


Abstract

The out-of-distribution (OOD) problem generally arises when neural networks encounter data that significantly deviates from the training data distribution, i.e., the in-distribution (InD). In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both the neuron output and its influence on model decisions. Then, to characterize the relationship between neurons and OOD issues, we introduce the neuron activation coverage (NAC) -- a simple measure of neuron behaviors under InD data. Leveraging NAC, we show that (1) InD and OOD inputs can be largely separated based on neuron behavior, which significantly eases the OOD detection problem; our approach surpasses 21 previous methods over three benchmarks (CIFAR-10, CIFAR-100, and ImageNet-1K); and (2) a positive correlation between NAC and model generalization ability holds consistently across architectures and datasets, which enables a NAC-based criterion for evaluating model robustness. Compared to prevalent InD validation criteria, NAC not only selects more robust models, but also correlates more strongly with OOD test performance.
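The coverage idea in the abstract can be sketched in a few lines. The code below is an illustrative simplification, not the authors' implementation: it builds per-neuron histograms of InD activations as a "coverage profile", then scores a test input by how frequently its activations fall into regions covered by InD data (a low score suggests OOD). The class name, bin ranges, and scoring rule are all assumptions for illustration; the paper's actual NAC formulation also weighs each neuron's influence on the model decision, which is omitted here.

```python
import numpy as np

class NACSketch:
    """Illustrative NAC-style OOD scorer (not the paper's exact method).

    Per-neuron histograms of InD activations serve as a coverage profile;
    test inputs whose activations land in rarely-covered bins receive low
    scores, flagging them as OOD-like.
    """

    def __init__(self, n_bins=50, lo=-10.0, hi=10.0):
        self.n_bins = n_bins
        self.edges = np.linspace(lo, hi, n_bins + 1)
        self.hist = None  # coverage frequencies, shape (n_neurons, n_bins)

    def fit(self, ind_activations):
        # ind_activations: (n_samples, n_neurons) neuron outputs on InD data
        n_neurons = ind_activations.shape[1]
        self.hist = np.zeros((n_neurons, self.n_bins))
        for j in range(n_neurons):
            counts, _ = np.histogram(ind_activations[:, j], bins=self.edges)
            self.hist[j] = counts / max(counts.sum(), 1)  # per-bin frequency
        return self

    def score(self, activations):
        # activations: (n_samples, n_neurons); higher score = more InD-like
        idx = np.digitize(activations, self.edges) - 1
        idx = np.clip(idx, 0, self.n_bins - 1)  # fold out-of-range into edge bins
        neuron_ids = np.arange(self.hist.shape[0])
        # average coverage frequency across neurons for each sample
        return np.array([self.hist[neuron_ids, row].mean() for row in idx])
```

A quick sanity check with synthetic activations: InD samples drawn near the fitted distribution should score higher than strongly shifted (OOD-like) ones.

```python
rng = np.random.default_rng(0)
ind = rng.normal(0.0, 1.0, (1000, 16))   # stand-in for InD activations
ood = rng.normal(6.0, 1.0, (100, 16))    # shifted, OOD-like activations
scorer = NACSketch().fit(ind)
print(scorer.score(ind).mean() > scorer.score(ood).mean())
```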

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet-1k vs iNaturalist | NAC-UE (ResNet-50) | AUROC | 96.52 | — | Unverified |
| ImageNet-1k vs OpenImage-O | NAC-UE (ResNet-50) | AUROC | 91.45 | — | Unverified |
| ImageNet-1k vs Textures | NAC-UE (ResNet-50) | AUROC | 97.9 | — | Unverified |

Reproductions