SOTAVerified

Prototype Rectification for Few-Shot Learning

2019-11-25 · ECCV 2020 · Code Available

Jinlu Liu, Liang Song, Yongqiang Qin


Abstract

Few-shot learning requires recognizing novel classes from scarce labeled data. Prototypical networks are effective in existing research; however, training on the narrow distribution of scarce data usually yields biased prototypes. In this paper, we identify two key influencing factors in this process: the intra-class bias and the cross-class bias. We then propose a simple yet effective approach for prototype rectification in the transductive setting. The approach uses label propagation to diminish the intra-class bias and feature shifting to diminish the cross-class bias. We also provide a theoretical analysis to establish its rationale as well as a lower bound on its performance. Effectiveness is shown on three few-shot benchmarks. Notably, our approach achieves state-of-the-art performance on both miniImageNet (70.31% on 1-shot and 81.89% on 5-shot) and tieredImageNet (78.74% on 1-shot and 86.92% on 5-shot).
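The two rectification steps described in the abstract can be sketched in code. The outline below is an illustrative NumPy reconstruction, not the authors' implementation: the cross-class bias is reduced by shifting query features by the difference between the support-set and query-set means, and the intra-class bias is reduced by pseudo-labeling queries with cosine similarity to the basic prototypes and folding them back into a weighted prototype update. The function name, the `temperature` parameter, and the exact soft-weighting scheme are assumptions chosen for clarity.

```python
import numpy as np

def rectify_prototypes(support, support_labels, query, n_way, temperature=10.0):
    """Sketch of transductive prototype rectification (assumed interface).

    support: (n_support, d) features; support_labels: (n_support,) in [0, n_way)
    query:   (n_query, d) unlabeled features. Returns (n_way, d) prototypes.
    """
    # Cross-class bias: shift query features toward the support distribution
    # by the difference of the two set means (feature shifting).
    shift = support.mean(axis=0) - query.mean(axis=0)
    query = query + shift

    # Basic prototypes: per-class mean of support features.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_way)])

    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    # Intra-class bias: pseudo-label queries via cosine similarity, then
    # recompute each prototype from support plus softly weighted queries.
    sims = normalize(query) @ normalize(protos).T            # (n_query, n_way)
    weights = np.exp(temperature * sims)
    weights = weights / weights.sum(axis=1, keepdims=True)   # soft assignments

    rectified = []
    for c in range(n_way):
        w = weights[:, c:c + 1]                              # (n_query, 1)
        num = support[support_labels == c].sum(axis=0) + (w * query).sum(axis=0)
        den = (support_labels == c).sum() + w.sum()
        rectified.append(num / den)
    return np.stack(rectified)
```

In a 5-way episode this takes the support features, their labels, and the unlabeled query features, and returns five rectified prototypes; classification then assigns each query to its nearest rectified prototype.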

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Dirichlet CUB-200 (5-way, 1-shot) | BD-CSPN | 1:1 Accuracy | 74.5 | — | Unverified |
| Dirichlet CUB-200 (5-way, 5-shot) | BD-CSPN | 1:1 Accuracy | 87.1 | — | Unverified |
| Dirichlet Mini-ImageNet (5-way, 1-shot) | BD-CSPN | 1:1 Accuracy | 67 | — | Unverified |
| Dirichlet Mini-ImageNet (5-way, 5-shot) | BD-CSPN | 1:1 Accuracy | 80.2 | — | Unverified |
| Dirichlet Tiered-ImageNet (5-way, 1-shot) | BD-CSPN | 1:1 Accuracy | 74.1 | — | Unverified |
| Dirichlet Tiered-ImageNet (5-way, 5-shot) | BD-CSPN | 1:1 Accuracy | 84.8 | — | Unverified |
| Mini-ImageNet (1-shot learning) | BD-CSPN | Accuracy | 70.31 | — | Unverified |