
MetaFun: Meta-Learning with Iterative Functional Updates

2019-12-05 · ICML 2020 · Code Available

Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh


Abstract

We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Furthermore, rather than directly producing the representation, we learn a neural update rule resembling functional gradient descent which iteratively improves the representation. The final representation is used to condition the decoder to make predictions on unlabeled data. Our approach is the first to demonstrate the success of encoder-decoder style meta-learning methods like conditional neural processes on large-scale few-shot classification benchmarks such as miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.
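The iterative update described in the abstract can be sketched in a few lines: maintain a functional representation r(·) at the context and target inputs, compute a local update at each labeled context point, and smooth those updates over all inputs with a kernel before taking a gradient-descent-like step. The sketch below is a toy illustration under stated assumptions — the linear decoder, the residual-style local update, and the RBF kernel are simplifications, not the paper's exact architecture (which uses learned neural update rules and attention/kernel variants).

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Pairwise RBF kernel between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def metafun_sketch(x_ctx, y_ctx, x_tgt, dim=8, steps=5, lr=0.1, seed=0):
    """Toy sketch of MetaFun-style iterative functional updates.

    The functional representation r(.) is tracked only at the context
    and target inputs. At each step a local update is computed at every
    context point (here, a simple error signal toward the label) and
    spread over all evaluation points by a kernel, mimicking functional
    gradient descent. All names and the fixed linear decoder are
    illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x_all = np.concatenate([x_ctx, x_tgt], axis=0)
    r = np.zeros((len(x_all), dim))           # r_0: initial representation
    W = rng.normal(scale=0.1, size=(dim,))    # toy decoder weights

    for _ in range(steps):
        pred_ctx = r[: len(x_ctx)] @ W        # decode at context points
        err = (pred_ctx - y_ctx)[:, None]     # local error per context point
        u = err * W[None, :]                  # local update in function space
        K = rbf_kernel(x_all, x_ctx)          # smooth updates over all inputs
        r = r - lr * (K @ u)                  # functional update: r_{t+1}
    return r[len(x_ctx):] @ W                 # decoded predictions at targets
```

In the paper the local updater and the decoder are learned networks, and the kernel can be replaced by dot-product attention (the MetaFun-Attention variant in the table below); this sketch only conveys the shape of the iteration.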

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| miniImageNet 5-way (1-shot) | MetaFun-Attention | Accuracy | 64.13 | | Unverified |
| miniImageNet 5-way (5-shot) | MetaFun-Attention | Accuracy | 80.82 | | Unverified |
| tieredImageNet 5-way (1-shot) | MetaFun-Attention | Accuracy | 67.72 | | Unverified |
| tieredImageNet 5-way (5-shot) | MetaFun-Kernel | Accuracy | 83.28 | | Unverified |

Reproductions