Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang, Yixuan Li, R. Srikant
Code
- github.com/facebookresearch/odin (official, referenced in paper; PyTorch, ★ 0)
- github.com/remic-othr/openmibood (PyTorch, ★ 45)
- github.com/jun-cen/unified_open_set_recognition (PyTorch, ★ 35)
- github.com/kobybibas/pnml_ood_detection (PyTorch, ★ 25)
- github.com/kingjamessong/rankfeat (PyTorch, ★ 20)
- github.com/guyAmit/GLOD (PyTorch, ★ 16)
- github.com/ShiyuLiang/odin-pytorch (PyTorch, ★ 0)
- github.com/ericjang/odin (PyTorch, ★ 0)
- github.com/JoonHyung-Park/ODIN (PyTorch, ★ 0)
Abstract
We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%.
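The two ingredients the abstract names, temperature scaling and a small input perturbation, can be sketched on a toy linear classifier. This is a minimal NumPy illustration under stated assumptions, not the authors' released code: the model (logits = W @ x), the function name `odin_score`, and the analytic gradient for the linear case are all illustrative; the hyperparameter ranges (T around 1000, eps around 0.0014) follow values reported for the method.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax for a 1-D logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def odin_score(x, W, T=1000.0, eps=0.0014):
    """ODIN-style score for a toy linear classifier (logits = W @ x).

    1. Temperature scaling: divide the logits by T before the softmax.
    2. Input perturbation: nudge x in the direction that increases the
       temperature-scaled max softmax probability, then rescore.
    The final score is thresholded: low score => flag as out-of-distribution.
    """
    # Temperature-scaled softmax on the original input.
    p = softmax(W @ x / T)
    yhat = p.argmax()
    # For a linear model, the gradient of log softmax(Wx/T)[yhat]
    # w.r.t. x has the closed form (W[yhat] - p @ W) / T.
    grad = (W[yhat] - p @ W) / T
    # Perturb the input toward higher confidence (sign of the gradient).
    x_tilde = x + eps * np.sign(grad)
    # Score the perturbed input with the same temperature-scaled softmax.
    return softmax(W @ x_tilde / T).max()
```

In-distribution inputs tend to gain more confidence from the perturbation than out-of-distribution ones, so thresholding `odin_score` separates the two score distributions better than the raw max softmax probability.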
Benchmark Results
| Dataset (in- vs. out-of-distribution) | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| ImageNet dogs vs ImageNet non-dogs | ResNet 34 + ODIN | AUROC | 90.8 | — | Unverified |
| MS-1M vs. IJB-C | ResNeXt 50 + ODIN | AUROC | 61.3 | — | Unverified |