SOTAVerified

Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

2019-06-28 · NeurIPS 2019 · Code Available

Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song


Abstract

Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating or reducing the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.
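The self-supervised signal evaluated in the paper is rotation prediction (RotNet-style): each training image is rotated by 0, 90, 180, or 270 degrees, and an auxiliary head is trained to predict the rotation, alongside any supervised loss. A minimal sketch of how such a rotation batch can be constructed is below; the function name and array layout are illustrative, not the authors' implementation.

```python
import numpy as np

def make_rotation_batch(images):
    """Build the rotation pretext task from a batch of images.

    images: array of shape (N, H, W, C).
    Returns (rotated, labels): a (4N, H, W, C) batch where each image
    appears under all four quarter-turn rotations, and labels in {0..3}
    giving the number of 90-degree turns applied. An auxiliary classifier
    trained on these labels provides the self-supervised loss that is
    added to the supervised objective.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):  # k quarter-turns counter-clockwise
            rotated.append(np.rot90(img, k=k, axes=(0, 1)))
            labels.append(k)
    return np.stack(rotated), np.array(labels)
```

In training, the rotation cross-entropy on this expanded batch is simply added (optionally with a weighting factor) to the usual supervised loss; at test time the rotation head can also be reused to score out-of-distribution inputs, since anomalous images tend to yield less confident rotation predictions.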

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 | ROT+Trans | ROC-AUC | 86.3 | | Unverified |
| Anomaly Detection on Unlabeled ImageNet-30 vs CUB-200 | ROT+Trans | ROC-AUC | 74.5 | | Unverified |
| One-class CIFAR-10 | SSOOD | AUROC | 90.1 | | Unverified |
| One-class ImageNet-30 | RotNet + Self-Attention | AUROC | 81.6 | | Unverified |
| One-class ImageNet-30 | RotNet + Translation | AUROC | 77.9 | | Unverified |
| One-class ImageNet-30 | Supervised (OE) | AUROC | 56.1 | | Unverified |
| One-class ImageNet-30 | RotNet | AUROC | 65.3 | | Unverified |
| One-class ImageNet-30 | RotNet + Translation + Self-Attention + Resize | AUROC | 85.7 | | Unverified |
| One-class ImageNet-30 | RotNet + Translation + Self-Attention | AUROC | 84.8 | | Unverified |
