A Closer Look at Few-shot Classification
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, Jia-Bin Huang
Code
- github.com/wyharveychen/CloserLookFewShot (official, in paper; PyTorch, ★ 0)
- github.com/sicara/easy-few-shot-learning (PyTorch, ★ 1,301)
- github.com/cyvius96/few-shot-meta-baseline (PyTorch, ★ 653)
- github.com/yinboc/few-shot-meta-baseline (PyTorch, ★ 653)
- github.com/hu-my/taskattributedistance (PyTorch, ★ 143)
- github.com/anujinho/trident (PyTorch, ★ 40)
- github.com/vinuni-vishc/Few-Shot-Cosine-Transformer (PyTorch, ★ 31)
- github.com/vinuni-vishc/few-shot-transformer (PyTorch, ★ 30)
- github.com/mikehuisman/revisiting-learned-optimizers (PyTorch, ★ 5)
- github.com/tjujianyu/rrl (PyTorch, ★ 5)
Abstract
Few-shot classification aims to learn a classifier that recognizes classes unseen during training from only a few labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
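The modified baseline (Baseline++) reduces intra-class variation by replacing the standard linear classification head with a distance-based one: class scores are scaled cosine similarities between the extracted feature and learnable per-class weight vectors. A minimal sketch of that scoring rule, in NumPy rather than the authors' PyTorch code, with illustrative names and a placeholder scale factor:

```python
import numpy as np

def cosine_logits(features, class_weights, scale=10.0):
    """Baseline++-style scoring (a sketch, not the authors' exact code).

    Each logit is a scaled cosine similarity between a feature vector and a
    per-class weight vector, replacing the usual inner product + bias of a
    linear layer. `scale` is an assumed temperature hyperparameter.
    """
    # L2-normalize features and class weight vectors so the dot product
    # below equals the cosine of the angle between them.
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=-1, keepdims=True)
    return scale * f @ w.T  # shape: (n_examples, n_classes)
```

Because the norms cancel out, only the direction of a feature matters, which pulls embeddings of the same class toward a shared direction; during few-shot fine-tuning, only the class weight vectors are trained on the support set.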
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Dirichlet CUB-200 (5-way, 1-shot) | Baseline++ | 1:1 Accuracy | 69.4 | — | Unverified |
| Dirichlet CUB-200 (5-way, 5-shot) | Baseline++ | 1:1 Accuracy | 87.5 | — | Unverified |
| Dirichlet Mini-Imagenet (5-way, 1-shot) | Baseline++ | 1:1 Accuracy | 60.4 | — | Unverified |
| Dirichlet Mini-Imagenet (5-way, 5-shot) | Baseline++ | 1:1 Accuracy | 79.7 | — | Unverified |
| Dirichlet Tiered-Imagenet (5-way, 1-shot) | Baseline++ | 1:1 Accuracy | 68.0 | — | Unverified |
| Dirichlet Tiered-Imagenet (5-way, 5-shot) | Baseline++ | 1:1 Accuracy | 84.2 | — | Unverified |
| Mini-ImageNet-CUB 5-way (1-shot) | Baseline++ (Chen et al., 2019) | Accuracy | 33.04 | — | Unverified |
| Mini-ImageNet-CUB 5-way (5-shot) | Baseline++ (Chen et al., 2019) | Accuracy | 62.04 | — | Unverified |