SOTAVerified

Outlier Exposure with Confidence Control for Out-of-Distribution Detection

2019-06-08 · Code Available

Aristotelis-Angelos Papadopoulos, Mohammad Reza Rajati, Nazim Shaikh, Jiamian Wang


Abstract

Deep neural networks have achieved great success in classification tasks in recent years. However, a major obstacle on the path toward artificial intelligence is the inability of neural networks to reliably detect samples from novel class distributions; consequently, most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on test examples from known classes. We propose a novel loss function that gives rise to a novel method, Outlier Exposure with Confidence Control (OECC), which achieves superior results in OOD detection with OE on both image and text classification tasks without requiring access to OOD samples. Additionally, we experimentally show that combining OECC with state-of-the-art post-training OOD detection methods, such as the Mahalanobis Detector (MD) and the Gramian Matrices (GM) method, further improves their performance on the OOD detection task, demonstrating the potential of combining training and post-training methods for OOD detection.
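The abstract describes OECC as an Outlier Exposure loss augmented with confidence-control regularization. A minimal numpy sketch of one plausible reading: cross-entropy on in-distribution samples, a term keeping the mean max-softmax confidence near the training accuracy, and a term pushing OOD softmax outputs toward the uniform distribution. The function name `oecc_loss`, the `lam1`/`lam2` weights, and the exact form of the regularizers are illustrative assumptions, not the paper's tuned formulation.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oecc_loss(logits_in, labels_in, logits_out,
              train_acc=0.95, lam1=0.07, lam2=0.05):
    """Sketch of an OE-style loss with confidence control.

    Illustrative assumptions (not the paper's exact terms/values):
    - cross-entropy on in-distribution samples
    - squared gap between mean max-softmax confidence and training accuracy
    - L1 distance of OOD softmax outputs from the uniform distribution
    """
    p_in = softmax(logits_in)
    n, k = p_in.shape
    # Standard cross-entropy on the known-class batch.
    ce = -np.mean(np.log(p_in[np.arange(n), labels_in] + 1e-12))
    # Keep average confidence close to the model's training accuracy.
    conf = (train_acc - p_in.max(axis=1).mean()) ** 2
    # Push OOD predictions toward the uniform distribution over k classes.
    p_out = softmax(logits_out)
    unif = np.abs(p_out - 1.0 / k).sum(axis=1).mean()
    return ce + lam1 * conf + lam2 * unif
```

Under this sketch, OOD logits that are already uniform contribute nothing to the third term, so the loss reduces to the in-distribution objective plus the confidence penalty.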

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 20 Newsgroups | 2-layer GRUs + OECC | AUROC | 99.18 | | Unverified |
| CIFAR-10 | ResNet 34 + OECC+GM | AUROC | 99.7 | | Unverified |
| CIFAR-100 | WRN 40-2 + OECC | FPR95 | 28.89 | | Unverified |
| CIFAR-100 vs CIFAR-10 | WRN 40-2 + OECC | AUROC | 78.7 | | Unverified |
| CIFAR-100 vs SVHN | OECC + MD | AUROC | 98.7 | | Unverified |
| CIFAR-10 vs CIFAR-100 | WRN 40-2 + OECC | AUROC | 94.9 | | Unverified |
| ImageNet dogs vs ImageNet non-dogs | ResNet 34 + OE | AUROC | 92.5 | | Unverified |
| MS-1M vs. IJB-C | ResNeXt 50 + OE | AUROC | 52.6 | | Unverified |

Reproductions