Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis
2021-09-29 · NeurIPS 2021
Qi Chen, Changjian Shui, Mario Marchand
- github.com/livreq/meta-sgld (official implementation, PyTorch)
Abstract
We derive a novel information-theoretic analysis of the generalization properties of meta-learning algorithms. Concretely, our analysis provides a generic understanding of both the conventional learning-to-learn framework and modern model-agnostic meta-learning (MAML) algorithms. Moreover, we provide a data-dependent generalization bound for a stochastic variant of MAML that is non-vacuous for deep few-shot learning. Empirical validations on both simulated data and a well-known few-shot benchmark show that, in most situations, our bound is orders of magnitude tighter than previous bounds that depend on the squared norm of gradients.
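To make the MAML setting the abstract refers to concrete, below is a minimal sketch of a first-order MAML-style inner/outer loop on a hypothetical linear-regression task family. All names, step sizes, and the task distribution are illustrative assumptions, not the paper's setup; the paper analyzes a stochastic variant (the repository name suggests an SGLD-based one), whereas this sketch shows only the deterministic first-order structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    # Gradient of the mean squared error for the linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sample_task():
    # Hypothetical task family: linear regression with random true weights.
    w_true = rng.normal(size=2)
    X = rng.normal(size=(20, 2))
    y = X @ w_true + 0.1 * rng.normal(size=20)
    # Split into a support set (for adaptation) and a query set (for meta-loss).
    return X[:10], y[:10], X[10:], y[10:]

w = np.zeros(2)            # meta-parameters (the shared initialization)
alpha, beta = 0.05, 0.01   # inner / outer step sizes (illustrative values)

for step in range(200):
    meta_grad = np.zeros_like(w)
    for _ in range(4):  # a small batch of tasks per meta-step
        Xs, ys, Xq, yq = sample_task()
        # Inner loop: one gradient step on the support set.
        w_adapted = w - alpha * loss_grad(w, Xs, ys)
        # First-order meta-gradient: query-set gradient at the adapted point
        # (the second-order term through w_adapted is dropped, as in FOMAML).
        meta_grad += loss_grad(w_adapted, Xq, yq)
    w -= beta * meta_grad / 4
```

The generalization question the paper studies is how well such a meta-learned initialization transfers to fresh tasks; the information-theoretic bounds control this transfer gap via the dependence between the algorithm's output and the training tasks, rather than via gradient norms.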