
Transductive Decoupled Variational Inference for Few-Shot Classification

2022-08-22 · Code Available

Anuj Singh, Hadi Jamali-Rad

Abstract

The versatility to learn from a handful of samples is the hallmark of human intelligence. Few-shot learning is an endeavour to bring this capability to machines. Inspired by the promise and power of probabilistic deep learning, we propose a novel variational inference network for few-shot classification (coined TRIDENT) that decouples the representation of an image into semantic and label latent variables and infers them simultaneously in an intertwined fashion. To induce task-awareness, as part of the inference mechanics of TRIDENT, we exploit information across both the query and support images of a few-shot task using a novel built-in attention-based transductive feature extraction module (which we call AttFEX). Our extensive experimental results corroborate the efficacy of TRIDENT and demonstrate that, using the simplest of backbones, it sets a new state of the art on the most commonly adopted datasets, miniImageNet and tieredImageNet (offering up to 4% and 5% improvements, respectively), as well as on the challenging cross-domain miniImageNet → CUB scenario, where it surpasses the best existing cross-domain baselines by a significant margin (up to 20%). Code and experiments can be found in our GitHub repository: https://github.com/anujinho/trident
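The two ideas the abstract names, decoupling an image into semantic and label latents, and a transductive attention module that mixes information across all support and query images of a task, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module names, feature dimension (64), latent dimension (32), and head count are all assumptions.

```python
# Hedged sketch of the abstract's two components, using PyTorch.
# All names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class AttentiveFeatureMixer(nn.Module):
    """Illustrative stand-in for an attention-based transductive module:
    every image's features attend to ALL support + query features of the task,
    which is what makes the resulting representation task-aware."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (n_images, dim) -- treat the whole task as one sequence
        x = feats.unsqueeze(0)          # (1, n_images, dim)
        out, _ = self.attn(x, x, x)     # self-attention over the task
        return out.squeeze(0)           # (n_images, dim)


class DecoupledVariationalEncoder(nn.Module):
    """Maps task-aware features to two separate Gaussian latents:
    z_sem (semantic content) and z_lab (label information),
    each sampled with the standard reparameterization trick."""

    def __init__(self, dim: int, z_dim: int):
        super().__init__()
        self.sem_head = nn.Linear(dim, 2 * z_dim)  # predicts mu and log-variance
        self.lab_head = nn.Linear(dim, 2 * z_dim)

    @staticmethod
    def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, feats: torch.Tensor):
        mu_s, logvar_s = self.sem_head(feats).chunk(2, dim=-1)
        mu_l, logvar_l = self.lab_head(feats).chunk(2, dim=-1)
        z_sem = self.reparameterize(mu_s, logvar_s)
        z_lab = self.reparameterize(mu_l, logvar_l)
        return z_sem, z_lab


# One 5-way 1-shot task with 15 query images: 5 support + 15 query = 20 images.
feats = torch.randn(20, 64)             # backbone features (assumed dim 64)
mixer = AttentiveFeatureMixer(64)
encoder = DecoupledVariationalEncoder(64, z_dim=32)
z_sem, z_lab = encoder(mixer(feats))    # two decoupled latents per image
print(z_sem.shape, z_lab.shape)
```

The key design point mirrored here is that the attention step sees support and query images jointly (transduction), while the encoder keeps the semantic and label factors in separate latent variables rather than one entangled embedding.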

Tasks

Few-Shot Image Classification

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Mini-ImageNet 5-way (1-shot) | TRIDENT | Accuracy | 86.11 | — | Unverified |
| Mini-ImageNet 5-way (5-shot) | TRIDENT | Accuracy | 95.95 | — | Unverified |
| Mini-ImageNet-CUB 5-way (1-shot) | TRIDENT | Accuracy | 84.61 | — | Unverified |
| Mini-ImageNet-CUB 5-way (5-shot) | TRIDENT | Accuracy | 80.74 | — | Unverified |
| Tiered ImageNet 5-way (1-shot) | TRIDENT | Accuracy | 86.97 | — | Unverified |
| Tiered ImageNet 5-way (5-shot) | TRIDENT | Accuracy | 96.57 | — | Unverified |
