SOTAVerified

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

2019-03-07 · ICLR 2020 · Code Available

Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, Hugo Larochelle


Abstract

Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models' ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions.
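The few-shot setup the abstract describes — classifying new classes from a handful of labeled examples — can be illustrated with a minimal nearest-centroid episode in the style of prototype-based meta-learners. This is a generic sketch with synthetic toy data, not code from the paper:

```python
import numpy as np

def prototype_predict(support_x, support_y, query_x):
    """Classify query embeddings by nearest class centroid (prototype).

    support_x: (n_support, dim) embeddings of the few labeled examples
    support_y: (n_support,) integer class labels
    query_x:   (n_query, dim) embeddings to classify
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 2-D "embeddings".
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.05, 0.05], [1.0, 0.9]])
print(prototype_predict(support_x, support_y, query_x))  # → [0 1]
```

In Meta-Dataset the episodes additionally vary in the number of classes ("ways") and examples per class ("shots"), which is part of what makes its tasks more realistic than fixed-shape benchmarks.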

Benchmark Results

| Dataset           | Model         | Metric    | Claimed | Verified | Status     |
|-------------------|---------------|-----------|---------|----------|------------|
| Meta-Dataset      | fo-Proto-MAML | Accuracy  | 63.43   |          | Unverified |
| Meta-Dataset      | Finetune      | Accuracy  | 58.76   |          | Unverified |
| Meta-Dataset      | k-NN          | Accuracy  | 54.32   |          | Unverified |
| Meta-Dataset Rank | fo-Proto-MAML | Mean Rank | 6.65    |          | Unverified |
| Meta-Dataset Rank | Finetune      | Mean Rank | 8.7     |          | Unverified |
| Meta-Dataset Rank | k-NN          | Mean Rank | 10.85   |          | Unverified |
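The Mean Rank rows aggregate each model's per-dataset ranking (1 = best accuracy on that dataset) into a single average. A minimal sketch of that aggregation, using made-up accuracies rather than the paper's numbers:

```python
def mean_rank(scores_by_dataset):
    """Average a model's rank (1 = best accuracy) across datasets.

    scores_by_dataset: list of dicts mapping model name -> accuracy.
    Returns a dict mapping model name -> mean rank.
    """
    ranks = {}
    for scores in scores_by_dataset:
        # Sort models by accuracy, best first; rank is the 1-based position.
        ordered = sorted(scores, key=scores.get, reverse=True)
        for pos, model in enumerate(ordered, start=1):
            ranks.setdefault(model, []).append(pos)
    return {m: sum(r) / len(r) for m, r in ranks.items()}

# Hypothetical accuracies on two datasets (illustrative only).
datasets = [
    {"fo-Proto-MAML": 63.4, "Finetune": 58.8, "k-NN": 54.3},
    {"fo-Proto-MAML": 50.0, "Finetune": 55.0, "k-NN": 45.0},
]
print(mean_rank(datasets))
```

A lower mean rank is better; ranking per dataset before averaging prevents datasets with inflated absolute accuracies from dominating the comparison.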

Reproductions