
Adaptation-Agnostic Meta-Training

2021-08-24 · ICML Workshop AutoML 2021

Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung


Abstract

Many meta-learning algorithms can be formulated as an interleaved process, in the sense that task-specific predictors are learned during inner-task adaptation and meta-parameters are updated during the meta-update. The standard meta-training strategy needs to differentiate through the inner-task adaptation procedure to optimize the meta-parameters. This imposes a constraint that the inner-task algorithms must be solvable analytically. Under this constraint, only simple algorithms with analytical solutions can be applied as the inner-task algorithms, limiting the model's expressiveness. To lift this limitation, we propose an adaptation-agnostic meta-training strategy. Following our proposed strategy, we can apply stronger algorithms (e.g., an ensemble of different types of algorithms) as the inner-task algorithm and achieve superior performance compared with popular baselines. The source code is available at https://github.com/jiaxinchen666/AdaptationAgnosticMetaLearning.
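To make the abstract's idea concrete, here is a minimal, hypothetical sketch of an adaptation-agnostic meta-training loop. It is not the paper's actual method: the inner-task algorithm (`inner_adapt`, here a nearest-centroid classifier fit on embedded support examples) is treated as a black box, and the meta-parameters `W` of the embedding are updated with a zeroth-order (finite-difference) estimate of the query loss, so the meta-update never differentiates through the inner-task solver. All function names and the toy task generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    # Meta-parameterized embedding (a linear map, for illustration).
    return x @ W

def inner_adapt(sx, sy, W):
    """Black-box inner-task algorithm: class centroids in embedding space.
    Any solver without an analytical solution (SVM, an ensemble, ...)
    could be swapped in here, since we never differentiate through it."""
    z = embed(sx, W)
    return np.stack([z[sy == c].mean(axis=0) for c in (0, 1)])

def query_loss(W, sx, sy, qx, qy):
    centroids = inner_adapt(sx, sy, W)  # adaptation output, treated as opaque
    z = embed(qx, W)
    d = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d  # closer centroid -> higher score
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(qy)), qy].mean()  # cross-entropy on query set

def sample_task():
    # Toy 2-way, 5-shot task: two Gaussian clusters in 2-D.
    mu = rng.normal(size=(2, 2)) * 2.0
    sx = np.vstack([mu[c] + 0.3 * rng.normal(size=(5, 2)) for c in (0, 1)])
    qx = np.vstack([mu[c] + 0.3 * rng.normal(size=(5, 2)) for c in (0, 1)])
    y = np.repeat([0, 1], 5)
    return sx, y, qx, y

task = sample_task()
W = rng.normal(size=(2, 2))
loss0 = query_loss(W, *task)

lr, eps = 0.05, 1e-4
for _ in range(50):
    # Zeroth-order meta-gradient: central finite differences on W,
    # which works for any black-box inner-task algorithm.
    g = np.zeros_like(W)
    for i in range(2):
        for j in range(2):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (query_loss(Wp, *task) - query_loss(Wm, *task)) / (2 * eps)
    W -= lr * g  # meta-update

loss1 = query_loss(W, *task)
```

In a real meta-training run the update would be averaged over freshly sampled tasks per step; a single fixed task is used here only to keep the sketch short. The key structural point is that `inner_adapt` is called, not differentiated.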
