Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang
Code
- github.com/yinboc/few-shot-meta-baseline (official, in paper; PyTorch; ★ 653)
- github.com/cyvius96/few-shot-meta-baseline (official, in paper; PyTorch; ★ 653)
- github.com/cvlab-stonybrook/fsl-rsvae (PyTorch; ★ 34)
- github.com/mlpc-ucsd/ConstellationNet (PyTorch; ★ 14)
- github.com/code-implementation1/Code5/tree/main/meta-baseline (MindSpore; ★ 0)
- github.com/Mind23-2/MindCode-101/tree/main/meta-baseline (MindSpore; ★ 0)
- github.com/MS-Mind/MS-Code-08/tree/main/meta-baseline (MindSpore; ★ 0)
- github.com/dmcv-ecnu/MindSpore_ModelZoo/tree/main/few-shot-meta-baseline_mindspore (MindSpore; ★ 0)
- github.com/Mind23-2/MindCode-3/tree/main/meta-baseline (MindSpore; ★ 0)
- github.com/Mind23-2/MindCode-3/tree/main/m2det (MindSpore; ★ 0)
Abstract
Meta-learning has been the most common framework for few-shot learning in recent years. It learns the model from collections of few-shot classification tasks, which is believed to have a key advantage: making the training objective consistent with the testing objective. However, some recent works report that training for whole-classification, i.e., classification over the whole label set, can yield embeddings comparable to or even better than those of many meta-learning algorithms. The boundary between these two lines of work remains underexplored, and the effectiveness of meta-learning in few-shot learning is still unclear. In this paper, we explore a simple process: meta-learning a whole-classification pre-trained model on its evaluation metric. We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks. Our further analysis sheds light on the trade-offs between the meta-learning objective and the whole-classification objective in few-shot learning.
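The evaluation metric the abstract refers to is nearest-centroid classification with cosine similarity: average the support embeddings of each class into a centroid, then classify each query by its temperature-scaled cosine similarity to the centroids. A minimal NumPy sketch of that metric (the function name, the `tau` temperature value, and the assumption that embeddings come from a pre-trained encoder are illustrative, not taken from the paper):

```python
import numpy as np

def cosine_nearest_centroid(support, support_labels, query, n_way, tau=10.0):
    """Few-shot classification by cosine similarity to class centroids.

    support: (n_way * k_shot, d) embeddings of the support set,
             assumed to come from a pre-trained encoder.
    support_labels: (n_way * k_shot,) integer labels in [0, n_way).
    query: (n_query, d) embeddings of the query set.
    Returns predicted labels and softmax probabilities over the n_way classes.
    """
    # One centroid per class: the mean of that class's support embeddings.
    centroids = np.stack([support[support_labels == c].mean(axis=0)
                          for c in range(n_way)])
    # L2-normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    # Temperature-scaled cosine logits, shape (n_query, n_way).
    logits = tau * (q @ c.T)
    # Numerically stable softmax over classes.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs
```

During meta-learning, the cross-entropy loss on these softmax probabilities would be backpropagated into the encoder; the sketch above only covers the forward evaluation step.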