Relational Embedding for Few-Shot Classification
Dahyun Kang, Heeseung Kwon, Juhong Min, Minsu Cho
Code
- github.com/dahyun-kang/renet (official PyTorch implementation, ★ 122)
Abstract
We propose to address the problem of few-shot classification by meta-learning "what to observe" and "where to attend" in a relational perspective. Our method leverages relational patterns within and between images via self-correlational representation (SCR) and cross-correlational attention (CCA). Within each image, the SCR module transforms a base feature map into a self-correlation tensor and learns to extract structural patterns from the tensor. Between the images, the CCA module computes cross-correlation between two image representations and learns to produce co-attention between them. Our Relational Embedding Network (RENet) combines the two relational modules to learn relational embedding in an end-to-end manner. In experimental evaluation, it achieves consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmarks of miniImageNet, tieredImageNet, CUB-200-2011, and CIFAR-FS.
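The two relational operations the abstract names can be illustrated concretely. Below is a minimal NumPy sketch, assuming a base feature map of shape (C, H, W): the self-correlation tensor holds cosine similarities between each spatial position and its local neighborhood (the input the SCR module learns from), and the cross-correlation matrix holds pairwise cosine similarities between two images' positions, from which a co-attention map can be derived via softmax. Function names, the neighborhood radius `u`, and the softmax temperature are illustrative, not the paper's exact parameterization; RENet additionally learns convolutional blocks on top of these tensors.

```python
import numpy as np

def l2norm(x, axis, eps=1e-8):
    """Channel-wise L2 normalization so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def self_correlation(feat, u=2):
    """Self-correlation tensor of a (C, H, W) feature map.

    Returns an (H, W, 2u+1, 2u+1) tensor: for each position, cosine
    similarity with every position in its (2u+1) x (2u+1) neighborhood.
    """
    C, H, W = feat.shape
    f = l2norm(feat, axis=0)
    pad = np.pad(f, ((0, 0), (u, u), (u, u)))  # zero-pad spatial dims
    out = np.zeros((H, W, 2 * u + 1, 2 * u + 1))
    for dy in range(2 * u + 1):
        for dx in range(2 * u + 1):
            shifted = pad[:, dy:dy + H, dx:dx + W]
            out[:, :, dy, dx] = (f * shifted).sum(axis=0)
    return out

def cross_correlation(fq, fs):
    """All-pairs cosine similarity between query and support feature maps.

    fq, fs: (C, H, W); returns an (H*W, H*W) correlation matrix.
    """
    C, H, W = fq.shape
    q = l2norm(fq.reshape(C, -1), axis=0)
    s = l2norm(fs.reshape(C, -1), axis=0)
    return q.T @ s

def co_attention(corr, temperature=5.0):
    """Softmax over support positions: one attention row per query position."""
    e = np.exp(temperature * (corr - corr.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)
```

Note the sanity check built into the construction: the center entry of each self-correlation neighborhood is a position's similarity with itself, so it is always 1, and each co-attention row sums to 1 by construction.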
Tasks
- Few-Shot Image Classification
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| miniImageNet 5-way 1-shot | RENet | Accuracy (%) | 67.60 | — | Unverified |
| miniImageNet 5-way 5-shot | RENet | Accuracy (%) | 82.58 | — | Unverified |
| tieredImageNet 5-way 1-shot | RENet | Accuracy (%) | 71.61 | — | Unverified |
| tieredImageNet 5-way 5-shot | RENet | Accuracy (%) | 85.28 | — | Unverified |
| CUB-200-2011 5-way 1-shot | RENet | Accuracy (%) | 79.49 | — | Unverified |
| CUB-200-2011 5-way 5-shot | RENet | Accuracy (%) | 91.11 | — | Unverified |
| CIFAR-FS 5-way 1-shot | RENet | Accuracy (%) | 74.51 | — | Unverified |
| CIFAR-FS 5-way 5-shot | RENet | Accuracy (%) | 86.60 | — | Unverified |