SOTAVerified

Delving into Out-of-Distribution Detection with Vision-Language Representations

2022-11-24

Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li

Code Available — Be the first to reproduce this paper.


Abstract

Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 13.1% (AUROC). Code is available at https://github.com/deeplearning-wisc/MCM.
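The MCM score described in the abstract reduces to a maximum softmax probability over temperature-scaled cosine similarities between an image embedding and per-class text ("concept") embeddings. A minimal NumPy sketch, assuming precomputed L2-normalized CLIP image and text features (the function name, temperature default, and toy data below are illustrative, not the authors' reference implementation):

```python
import numpy as np

def mcm_score(image_feat, concept_feats, tau=1.0):
    """Maximum Concept Matching (MCM) score, sketched.

    image_feat: (d,) L2-normalized image embedding.
    concept_feats: (K, d) L2-normalized text embeddings, one per
        in-distribution class name (e.g. prompts like "a photo of a <class>").
    Returns the max softmax probability over tau-scaled cosine similarities;
    a low score flags the input as likely OOD.
    """
    sims = concept_feats @ image_feat      # cosine similarities s_i
    logits = sims / tau                    # temperature scaling
    probs = np.exp(logits - logits.max())  # stable softmax over concepts
    probs /= probs.sum()
    return probs.max()

# Illustrative usage with toy orthonormal "concepts":
concepts = np.eye(4)[:2]                       # 2 classes, d = 4
in_dist = np.array([1.0, 0.0, 0.0, 0.0])       # aligns with concept 0
ambiguous = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
print(mcm_score(in_dist, concepts))    # high: matches one concept strongly
print(mcm_score(ambiguous, concepts))  # 0.5: matches no concept decisively
```

In practice the features would come from a CLIP image encoder and text encoder, and OOD detection thresholds this score; the softmax with a tuned temperature, rather than the raw cosine similarity, is what the paper analyzes as the key ingredient.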

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet-1k vs Curated OODs (avg.) | MCM (CLIP-L) | FPR95 | 38.17 | | Unverified |
| ImageNet-1k vs iNaturalist | MCM (CLIP-B) | AUROC | 94.61 | | Unverified |
| ImageNet-1k vs iNaturalist | MCM (CLIP-L) | AUROC | 94.95 | | Unverified |
| ImageNet-1k vs Places | MCM (CLIP-L) | FPR95 | 35.42 | | Unverified |
| ImageNet-1k vs Places | MCM (CLIP-B) | FPR95 | 44.69 | | Unverified |
| ImageNet-1k vs SUN | MCM (CLIP-L) | FPR95 | 29 | | Unverified |
| ImageNet-1k vs SUN | MCM (CLIP-B) | FPR95 | 37.59 | | Unverified |
| ImageNet-1k vs Textures | MCM (CLIP-L) | AUROC | 84.88 | | Unverified |
| ImageNet-1k vs Textures | MCM (CLIP-B) | AUROC | 86.11 | | Unverified |

Reproductions