SOTAVerified

Improving out-of-distribution generalization via multi-task self-supervised pretraining

2020-03-30

Isabela Albuquerque, Nikhil Naik, Junnan Li, Nitish Keskar, Richard Socher

Unverified — Be the first to reproduce this paper.

Abstract

Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained using self-supervised learning are comparable to, or better than, supervised learning for domain generalization in computer vision. We introduce a new self-supervised pretext task of predicting responses to Gabor filter banks and demonstrate that multi-task learning of compatible pretext tasks improves domain generalization performance as compared to training individual tasks alone. Features learnt through self-supervision generalize better to unseen domains than their supervised counterparts when the domain shift between training and test distributions is large, and they even show better localization of objects of interest. Self-supervised feature representations can also be combined with other domain generalization methods to further boost performance.
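The multi-task setup described in the abstract can be pictured as a shared backbone with one output head per pretext task. The sketch below is a minimal illustration (not the authors' released code) of two of the pretext tasks named on this page: rotation prediction and regression of Gabor filter-bank responses, sharing an AlexNet feature extractor. The target definition (mean absolute response per filter orientation), the loss weighting, and helper names such as GaborBankTargets are assumptions made for the example; it requires PyTorch and torchvision >= 0.13.

```python
# Minimal sketch of multi-task self-supervised pretraining with two pretext
# heads on a shared backbone: rotation prediction (0/90/180/270 degrees) and
# regression of mean responses to a Gabor filter bank. Hypothetical example,
# not the authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Build a single real Gabor kernel as a (ksize, ksize) tensor."""
    half = ksize // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_t / lambd + psi)
    return envelope * carrier


class GaborBankTargets(nn.Module):
    """Mean absolute response of a grayscale image to each filter in the bank;
    used here as regression targets for the Gabor pretext head (an assumed
    target definition)."""

    def __init__(self, ksize=15, n_orient=8, lambd=8.0, sigma=4.0):
        super().__init__()
        kernels = torch.stack(
            [gabor_kernel(ksize, sigma, k * math.pi / n_orient, lambd)
             for k in range(n_orient)]
        ).unsqueeze(1)                                  # (n_orient, 1, k, k)
        self.register_buffer("kernels", kernels)

    @torch.no_grad()
    def forward(self, images):                          # images: (B, 3, H, W)
        gray = images.mean(dim=1, keepdim=True)
        responses = F.conv2d(gray, self.kernels,
                             padding=self.kernels.shape[-1] // 2)
        return responses.abs().mean(dim=(2, 3))        # (B, n_orient)


class MultiTaskSelfSup(nn.Module):
    """Shared AlexNet features with one head per pretext task."""

    def __init__(self, n_orient=8):
        super().__init__()
        backbone = torchvision.models.alexnet(weights=None)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.rot_head = nn.Linear(256, 4)               # 4 rotation classes
        self.gabor_head = nn.Linear(256, n_orient)      # Gabor response regression

    def forward(self, x):
        h = self.pool(self.features(x)).flatten(1)
        return self.rot_head(h), self.gabor_head(h)


# One illustrative training step combining the two pretext losses; the 0.1
# weight on the Gabor loss is an arbitrary choice for the example.
model, gabor_targets = MultiTaskSelfSup(), GaborBankTargets()
images = torch.randn(4, 3, 224, 224)
k = torch.randint(0, 4, (4,))                           # random rotation labels
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])
rot_logits, gabor_pred = model(rotated)
loss = F.cross_entropy(rot_logits, k) \
    + 0.1 * F.mse_loss(gabor_pred, gabor_targets(images))
loss.backward()
```

The paper's reported model additionally includes a DeepCluster pretext task; adding it would mean one more head trained against cluster assignments of the backbone features, following the same shared-backbone pattern.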

Tasks

Domain Generalization

Benchmark Results

Dataset  Model                                  Metric                Claimed  Verified  Status
PACS     Rotation+Gabor+DeepCluster (AlexNet)   Average Accuracy (%)  69.32    -         Unverified

Reproductions