A Geometry-Based View of Mahalanobis OOD Detection
Denis Janiak, Jakub Binkowski, Tomasz Kajdanowicz
Abstract
Out-of-distribution (OOD) detection is critical for reliable deployment of vision models. Mahalanobis-based detectors remain strong baselines, yet their performance varies widely across modern pretrained representations, and it is unclear which properties of a feature space cause these methods to succeed or fail. We conduct a large-scale study across diverse foundation-model backbones and Mahalanobis variants. First, we show that Mahalanobis-style OOD detection is not universally reliable: performance is highly representation-dependent and can shift substantially with pretraining data and fine-tuning regimes. Second, we link this variability to in-distribution geometry and identify a two-term ID summary that consistently tracks Mahalanobis OOD behavior across detectors: within-class spectral structure and local intrinsic dimensionality. Finally, we treat normalization as a geometric control mechanism and introduce radially scaled ℓ_2 normalization, ϕ_β(z) = z/‖z‖^β, which preserves feature directions while contracting or expanding their radii. Varying β therefore changes only the radial geometry of the features, so the same quadratic detector sees a different ID geometry. We choose β from ID-only geometry signals and typically outperform fixed-normalization baselines.
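The two components described in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function names (`phi_beta`, `mahalanobis_score`), the single-Gaussian fit, and the toy data are assumptions made for clarity; the paper's actual detectors use class-conditional means and shared covariance estimates.

```python
import numpy as np

def phi_beta(z, beta):
    """Radially scaled l2 normalization: phi_beta(z) = z / ||z||^beta.

    beta = 0 leaves features unchanged; beta = 1 is standard l2
    normalization (unit radius); intermediate values contract or
    expand feature radii while preserving directions.
    """
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z / np.clip(norms, 1e-12, None) ** beta

def mahalanobis_score(z, mean, cov_inv):
    """Negative squared Mahalanobis distance to a mean.

    Higher scores indicate features closer to the ID distribution.
    """
    d = z - mean
    return -np.einsum("...i,ij,...j->...", d, cov_inv, d)

# Toy usage: fit on ID features, then score a query under different beta.
rng = np.random.default_rng(0)
id_feats = rng.normal(size=(500, 8))
query = rng.normal(size=(1, 8))
for beta in (0.0, 0.5, 1.0):
    f = phi_beta(id_feats, beta)
    mean = f.mean(axis=0)
    cov = np.cov(f, rowvar=False) + 1e-6 * np.eye(f.shape[1])
    cov_inv = np.linalg.inv(cov)
    score = mahalanobis_score(phi_beta(query, beta), mean, cov_inv)
    print(f"beta={beta}: score={float(score[0]):.3f}")
```

Because ϕ_β preserves directions and maps the radius ‖z‖ to ‖z‖^(1−β), sweeping β reshapes the ID geometry seen by the same quadratic detector, which is the control knob the paper selects using ID-only signals.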