PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation
Tal Reiss, Niv Cohen, Liron Bergman, Yedid Hoshen
Code
- github.com/talreiss/PANDA (official PyTorch implementation, ★ 96)
Abstract
Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pretrained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but previously unreported, result that combining pretrained features with simple anomaly detection and segmentation methods convincingly outperforms much more complex state-of-the-art methods. To obtain further performance gains in anomaly detection, we adapt pretrained features to the target distribution. Although transfer learning methods are well established for multi-class classification, the one-class classification (OCC) setting is far less explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: (i) a variant of early stopping that dynamically learns the stopping iteration, and (ii) elastic regularization inspired by continual learning. Our method, PANDA, outperforms the state of the art in the OCC, outlier-exposure, and anomaly-segmentation settings by large margins.
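The "simple anomaly detection method on pretrained features" that the abstract refers to can be illustrated with a kNN-distance scorer: a test sample is scored by its mean distance to its k nearest neighbors among the normal training features. The sketch below is minimal and uses synthetic Gaussian vectors as stand-ins for pretrained deep features (in practice one would extract penultimate-layer activations from an ImageNet-pretrained network); `knn_anomaly_scores` is an illustrative helper name, not an API from the paper's repository.

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=2):
    """Score each test feature by its mean distance to the k nearest
    normal training features (higher score = more anomalous)."""
    # Pairwise Euclidean distances: (n_test, n_train)
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    d.sort(axis=1)                 # ascending distances per test sample
    return d[:, :k].mean(axis=1)   # mean distance to k nearest neighbors

rng = np.random.default_rng(0)
# Synthetic stand-ins for pretrained features (assumption for this sketch):
# normal data clusters near 0, anomalies are shifted away.
normal_train = rng.normal(0.0, 1.0, size=(200, 16))
normal_test = rng.normal(0.0, 1.0, size=(20, 16))
anomalies = rng.normal(4.0, 1.0, size=(20, 16))

s_normal = knn_anomaly_scores(normal_train, normal_test)
s_anom = knn_anomaly_scores(normal_train, anomalies)
```

On well-separated features, anomaly scores cleanly exceed normal scores; the quality of the pretrained features is what makes this simple scorer competitive.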
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Cats and Dogs | PANDA | ROC AUC | 97.3 | — | Unverified |
| Cats and Dogs | PANDA-OE | ROC AUC | 94.5 | — | Unverified |
| Cats and Dogs | Self-Supervised DeepSVDD | ROC AUC | 50.5 | — | Unverified |
| Cats and Dogs | Self-Supervised One-class SVM (RBF kernel) | ROC AUC | 51.7 | — | Unverified |
| DIOR | PANDA | ROC AUC | 94.3 | — | Unverified |
| DIOR | PANDA-OE | ROC AUC | 95.9 | — | Unverified |
| DIOR | Self-Supervised DeepSVDD | ROC AUC | 70.0 | — | Unverified |
| DIOR | Self-Supervised One-class SVM (RBF kernel) | ROC AUC | 70.7 | — | Unverified |
| Fashion-MNIST | PANDA | ROC AUC | 95.6 | — | Unverified |
| Fashion-MNIST | PANDA-OE | ROC AUC | 91.8 | — | Unverified |
| Fashion-MNIST | Self-Supervised DeepSVDD | ROC AUC | 84.8 | — | Unverified |
| Fashion-MNIST | Self-Supervised One-class SVM (RBF kernel) | ROC AUC | 92.8 | — | Unverified |
| Hyper-Kvasir Dataset | PANDA | ROC AUC | 0.94 | — | Unverified |
| One-class CIFAR-10 | PANDA | ROC AUC | 96.2 | — | Unverified |
| One-class CIFAR-10 | PANDA-OE | ROC AUC | 98.9 | — | Unverified |
| One-class CIFAR-10 | Self-Supervised DeepSVDD | ROC AUC | 64.8 | — | Unverified |
| One-class CIFAR-10 | Self-Supervised One-class SVM (RBF kernel) | ROC AUC | 64.7 | — | Unverified |
| One-class CIFAR-100 | PANDA | ROC AUC | 94.1 | — | Unverified |
| One-class CIFAR-100 | PANDA-OE | ROC AUC | 97.3 | — | Unverified |
| One-class CIFAR-100 | Self-Supervised Multi-Head RotNet | ROC AUC | 80.1 | — | Unverified |
| One-class CIFAR-100 | Self-Supervised DeepSVDD | ROC AUC | 67.0 | — | Unverified |
| One-class CIFAR-100 | Self-Supervised One-class SVM (RBF kernel) | ROC AUC | 62.6 | — | Unverified |