Meta-Learning with Latent Embedding Optimization
Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell
Code
- github.com/deepmind/leo (official, in paper; TensorFlow) ★ 0
- github.com/yinboc/few-shot-meta-baseline (PyTorch) ★ 653
- github.com/xiangyu8/PT-MAP-sf (PyTorch) ★ 24
- github.com/timchen0618/pytorch-leo (PyTorch) ★ 0
- github.com/jiean001/models_m/tree/main/LEO (MindSpore) ★ 0
Abstract
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
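Since the abstract only summarizes the mechanism, the following minimal PyTorch sketch illustrates the core idea: encode the support set into a low-dimensional latent code, decode that code into classifier weights, and run the inner-loop gradient steps on the code rather than on the weights. Everything here is hypothetical and simplified (the `LEOSketch` class, the deterministic linear encoder/decoder, and all dimensions are assumptions); the paper's model additionally uses a relation network and stochastic (Gaussian) latent codes, and is not reproduced here.

```python
# Minimal sketch of the LEO idea (hypothetical, simplified; not the authors'
# implementation). Adaptation happens in latent space: gradients are taken
# with respect to the latent code z, and the decoder maps z back to weights.
import torch
import torch.nn.functional as F
from torch import nn


class LEOSketch(nn.Module):
    def __init__(self, embed_dim=64, latent_dim=16, n_classes=5):
        super().__init__()
        self.encoder = nn.Linear(embed_dim, latent_dim)  # data -> latent code
        self.decoder = nn.Linear(latent_dim, embed_dim)  # latent code -> classifier weights
        self.n_classes = n_classes

    def decode_logits(self, z, x):
        w = self.decoder(z)          # (n_classes, embed_dim) classifier weights
        return x @ w.t()             # logits for inputs x

    def forward(self, support_x, support_y, query_x, inner_steps=5, inner_lr=1.0):
        # One latent code per class: mean of the encoded support examples.
        z = torch.stack([self.encoder(support_x[support_y == c]).mean(0)
                         for c in range(self.n_classes)])
        # Inner loop: gradient steps on the low-dimensional code z, not on
        # the high-dimensional weights. create_graph=True lets the outer
        # loop backpropagate through the adaptation procedure.
        for _ in range(inner_steps):
            loss = F.cross_entropy(self.decode_logits(z, support_x), support_y)
            (grad,) = torch.autograd.grad(loss, z, create_graph=True)
            z = z - inner_lr * grad
        # Query predictions from the adapted code.
        return self.decode_logits(z, query_x)


# Toy usage with random data for a 5-way, 5-shot episode (shapes assumed).
model = LEOSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
support_x = torch.randn(25, 64)
support_y = torch.arange(5).repeat_interleave(5)
query_x, query_y = torch.randn(15, 64), torch.randint(0, 5, (15,))
logits = model(support_x, support_y, query_x)
outer_loss = F.cross_entropy(logits, query_y)  # meta-trains encoder/decoder
opt.zero_grad()
outer_loss.backward()
opt.step()
```

The design point the abstract emphasizes is visible in the inner loop: the gradient steps touch only the 16-dimensional code, so adaptation is decoupled from the dimensionality of the decoded model parameters.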