SOTAVerified

Mean-Shifted Contrastive Loss for Anomaly Detection

2021-06-07 · Code Available

Tal Reiss, Yedid Hoshen


Abstract

Deep anomaly detection methods learn representations that separate normal from anomalous images. Although self-supervised representation learning is commonly used, small dataset sizes limit its effectiveness. It was previously shown that utilizing external, generic datasets (e.g. ImageNet classification) can significantly improve anomaly detection performance. One such approach, outlier exposure, fails when the external datasets do not resemble the anomalies. We take the approach of transferring representations pre-trained on external datasets to anomaly detection. Anomaly detection performance can be significantly improved by fine-tuning the pre-trained representations on the normal training images. In this paper, we first demonstrate and analyze why contrastive learning, the most popular self-supervised learning paradigm, cannot be naively applied to pre-trained features: the pre-trained feature initialization causes poor conditioning for standard contrastive objectives, resulting in bad optimization dynamics. Based on our analysis, we provide a modified contrastive objective, the Mean-Shifted Contrastive Loss. Our method is highly effective and achieves new state-of-the-art anomaly detection performance, including 98.6% ROC-AUC on the CIFAR-10 dataset.
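To illustrate the idea described in the abstract, here is a minimal NumPy sketch of a mean-shifted contrastive objective: pre-trained features are L2-normalized, shifted by the normalized center of the normal training features, re-normalized, and then fed into an NT-Xent-style contrastive loss. This is an assumption-laden illustration, not the authors' reference implementation; the function names, the temperature value, and the two-view pairing scheme are all choices made here for the sketch.

```python
import numpy as np

def normalize(v, axis=-1, eps=1e-8):
    # L2-normalize vectors along the given axis.
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

def mean_shifted_features(feats, center):
    # feats: (N, D) raw backbone features; center: (D,) normalized mean
    # of the normal training features. The shift recenters the unit-sphere
    # features before re-normalizing (the "mean shift" of the title).
    unit = normalize(feats)
    return normalize(unit - center)

def msc_loss(shifted_a, shifted_b, temperature=0.25):
    # NT-Xent-style contrastive loss computed in the mean-shifted space.
    # shifted_a / shifted_b: (N, D) shifted features of two augmented
    # views per image; positives are the paired views.
    z = np.concatenate([shifted_a, shifted_b], axis=0)        # (2N, D)
    sim = z @ z.T / temperature                               # cosine sims
    n = shifted_a.shape[0]
    np.fill_diagonal(sim, -np.inf)                            # mask self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)]) # positive index
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

A typical usage would compute `center` once from the normal training set (`normalize(normalize(train_feats).mean(axis=0))`) and then minimize `msc_loss` over augmented pairs while fine-tuning the backbone.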

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Anomaly Detection on Unlabeled CIFAR-10 vs LSUN (Fix) | MeanShifted | ROC-AUC | 92.6 | — | Unverified |
| MVTec AD | Mean-Shifted Contrastive Loss | Detection AUROC | 87.2 | — | Unverified |
| One-class CIFAR-10 | Mean-Shifted Contrastive Loss | AUROC | 98.6 | — | Unverified |
| One-class CIFAR-100 | Mean-Shifted Contrastive Loss | AUROC | 96.5 | — | Unverified |
| Unlabeled CIFAR-10 vs CIFAR-100 | MeanShifted | AUROC | 90 | — | Unverified |

Reproductions