SOTAVerified

Falsehoods that ML researchers believe about OOD detection

2022-10-23

Andi Zhang, Damon Wischik


Abstract

An intuitive way to detect out-of-distribution (OOD) data is via the density function of a fitted probabilistic generative model: points with low density may be classified as OOD. But this approach has been found to fail in deep learning settings. In this paper, we list some falsehoods that machine learning researchers believe about density-based OOD detection. Many recent works have proposed likelihood-ratio-based methods to 'fix' the problem. We propose a framework, the OOD proxy framework, to unify these methods, and we argue that the likelihood ratio is a principled method for OOD detection, not a mere 'fix'. Finally, we discuss the relationship between domain discrimination and semantics.
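The two scoring rules contrasted in the abstract can be sketched with simple Gaussians standing in for deep generative models. Everything below is an illustrative assumption, not the paper's setup: the Gaussian models, the broad "background" model, and the 1% density threshold are all chosen only to make the contrast concrete.

```python
import numpy as np

def gauss_logpdf(x, mu, cov):
    """Log-density of a multivariate Gaussian (numpy-only helper)."""
    d = mu.shape[0]
    diff = x - mu
    return -0.5 * (d * np.log(2 * np.pi)
                   + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 2))  # in-distribution samples

# Fit the "generative model": here just the empirical mean and covariance.
mu, cov = train.mean(axis=0), np.cov(train, rowvar=False)

# Density-based detector: flag points whose log-density falls below the
# 1% quantile of the training scores (threshold choice is an assumption).
train_scores = np.array([gauss_logpdf(x, mu, cov) for x in train])
threshold = np.quantile(train_scores, 0.01)

def density_ood(x):
    return gauss_logpdf(x, mu, cov) < threshold

# Likelihood-ratio detector: score the model's density against a broad
# hypothetical "background" model, in the spirit of the likelihood-ratio
# methods the paper unifies under its OOD proxy framework.
bg_mu, bg_cov = np.zeros(2), 25.0 * np.eye(2)

def ratio_score(x):
    return gauss_logpdf(x, mu, cov) - gauss_logpdf(x, bg_mu, bg_cov)

in_point, ood_point = np.array([0.1, -0.2]), np.array([6.0, 6.0])
print(density_ood(in_point), density_ood(ood_point))   # → False True
print(ratio_score(in_point) > ratio_score(ood_point))  # → True
```

In this toy setting the two scores agree; the paper's point is that for deep generative models on real data raw density can rank OOD inputs *above* in-distribution ones, while the ratio against a background model remains a principled domain-discrimination score.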
