SOTAVerified

A Baseline for Few-Shot Image Classification

2019-09-06 · ICLR 2020 · Code Available

Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto


Abstract

Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
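The transductive fine-tuning described in the abstract can be illustrated with a minimal sketch: fit a linear classifier on the labelled support features with cross-entropy, while also minimizing the Shannon entropy of its predictions on the unlabelled query features. This is an assumption-laden toy version, not the paper's implementation; the hyper-parameters (`steps`, `lr`, `ent_weight`) and the purely linear head are illustrative choices.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def transductive_finetune(x_s, y_s, x_q, n_way, steps=100, lr=0.5, ent_weight=0.1):
    """Sketch of transductive fine-tuning: cross-entropy on the support
    set plus an entropy penalty on the unlabelled query set.
    Hyper-parameters here are illustrative, not the paper's."""
    d = x_s.shape[1]
    W = np.zeros((d, n_way))            # linear head (convex problem, zeros are fine)
    b = np.zeros(n_way)
    y1 = np.eye(n_way)[y_s]             # one-hot support labels
    for _ in range(steps):
        # cross-entropy gradient on the labelled support set
        p_s = softmax(x_s @ W + b)
        gW = x_s.T @ (p_s - y1) / len(x_s)
        gb = (p_s - y1).mean(axis=0)
        # entropy gradient on the unlabelled query set:
        # for H = -sum_k p_k log p_k, dH/dz_j = -p_j (log p_j + H)
        p_q = softmax(x_q @ W + b)
        logp = np.log(p_q + 1e-12)
        H = -(p_q * logp).sum(axis=1, keepdims=True)
        g_z = -p_q * (logp + H)
        gW += ent_weight * x_q.T @ g_z / len(x_q)
        gb += ent_weight * g_z.mean(axis=0)
        W -= lr * gW
        b -= lr * gb
    return softmax(x_q @ W + b).argmax(axis=1)
```

On a toy 2-way, 1-shot episode with well-separated features, the fine-tuned head assigns each query to its nearby support class; the entropy term simply sharpens the (already confident) query predictions.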

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Dirichlet CUB-200 (5-way, 1-shot) | Entropy Minimization | 1:1 Accuracy | 67.5 | | Unverified |
| Dirichlet CUB-200 (5-way, 5-shot) | Entropy Minimization | 1:1 Accuracy | 82.9 | | Unverified |
| Dirichlet Mini-Imagenet (5-way, 1-shot) | Entropy Minimization | 1:1 Accuracy | 58.5 | | Unverified |
| Dirichlet Mini-Imagenet (5-way, 5-shot) | Entropy Minimization | 1:1 Accuracy | 74.8 | | Unverified |
| Dirichlet Tiered-Imagenet (5-way, 1-shot) | Entropy Minimization | 1:1 Accuracy | 61.2 | | Unverified |
| Dirichlet Tiered-Imagenet (5-way, 5-shot) | Entropy Minimization | 1:1 Accuracy | 75.5 | | Unverified |
