Generalized Inner Loop Meta-Learning
Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala
Code
- github.com/learnables/learn2learn (official, in paper, PyTorch, ★ 2,878)
- github.com/facebookresearch/higher (official, in paper, PyTorch, ★ 1,628)
- github.com/neitzal/learning-to-distill-trajectories (PyTorch, ★ 0)
Abstract
Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem. In this paper, we give a formalization of this shared pattern, which we call GIMLI, prove its general requirements, and derive a general-purpose algorithm for implementing similar approaches. Based on this analysis and algorithm, we describe a library of our design, higher, which we share with the community to assist and enable future research into these kinds of meta-learning approaches. We end the paper by showcasing the practical applications of this framework and library through illustrative experiments and ablation studies which they facilitate.
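To make the shared pattern concrete, here is a minimal, dependency-free sketch of the nested optimization the abstract describes: an inner loop adapts a parameter by gradient descent, and an outer ("meta") gradient is taken through the unrolled inner loop with respect to the initialization. The toy objective, function names, and hyperparameters below are illustrative assumptions, not taken from the paper or from the higher library's API.

```python
# Illustrative sketch of inner-loop meta-learning (plain Python, no deps).
# Inner objective: f(w) = (w - c_train)^2, minimized by gradient descent.
# Outer objective: how well the adapted w fits a validation target c_val,
# differentiated through the unrolled inner loop w.r.t. the init w0.

def inner_loss_grad(w, c):
    # d/dw of the inner objective f(w) = (w - c)^2
    return 2.0 * (w - c)

def unroll_inner_loop(w0, c, lr=0.1, steps=5):
    """Run the inner loop, tracking dw/dw0 alongside the iterate."""
    w, dw_dw0 = w0, 1.0
    for _ in range(steps):
        g = inner_loss_grad(w, c)
        # Update: w_{k+1} = w_k - lr * 2 (w_k - c),
        # so the sensitivity scales by (1 - 2*lr) each step.
        w = w - lr * g
        dw_dw0 = (1.0 - 2.0 * lr) * dw_dw0
    return w, dw_dw0

def meta_step(w0, c_train, c_val, meta_lr=0.5):
    """One outer-loop update of the initialization w0."""
    w_adapted, dw_dw0 = unroll_inner_loop(w0, c_train)
    # Chain rule through the unrolled inner loop.
    outer_grad = 2.0 * (w_adapted - c_val) * dw_dw0
    return w0 - meta_lr * outer_grad

w0 = 0.0
for _ in range(50):
    w0 = meta_step(w0, c_train=1.0, c_val=3.0)
```

After the outer loop converges, the adapted parameter produced by the inner loop lands near the validation target, even though the inner loop itself only ever sees the training target; this is the effect of differentiating through the inner optimization rather than around it. In practice, higher automates exactly this bookkeeping for arbitrary PyTorch models and optimizers instead of a hand-derived scalar sensitivity.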