SOTAVerified

Uncertainty in Model-Agnostic Meta-Learning using Variational Inference

2019-07-27 · Code Available

Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro


Abstract

We introduce a new, rigorously formulated Bayesian meta-learning algorithm that learns a probability distribution over the prior of model parameters for few-shot learning. The proposed algorithm employs gradient-based variational inference to infer the posterior of the model parameters for a new task. Our algorithm can be applied to any model architecture and can be implemented in various machine learning paradigms, including regression and classification. We show that models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on two few-shot classification benchmarks (Omniglot and Mini-ImageNet), and competitive results on a multi-modal task-distribution regression problem.

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Mini-ImageNet 5-way (1-shot) | VAMPIRE | Accuracy | 51.54 | | Unverified |
| Mini-ImageNet 5-way (5-shot) | VAMPIRE | Accuracy | 64.31 | | Unverified |
| Omniglot 20-way (1-shot) | VAMPIRE | Accuracy | 93.2 | | Unverified |
| Omniglot 5-way (1-shot) | VAMPIRE | Accuracy | 98.43 | | Unverified |
| Omniglot 20-way (5-shot) | VAMPIRE | Accuracy | 98.52 | | Unverified |
| Omniglot 5-way (5-shot) | VAMPIRE | Accuracy | 99.56 | | Unverified |
| Tiered ImageNet 5-way (1-shot) | VAMPIRE | Accuracy | 69.87 | | Unverified |
| Tiered ImageNet 5-way (5-shot) | VAMPIRE | Accuracy | 82.7 | | Unverified |

Reproductions