A Universal Representation Transformer Layer for Few-Shot Image Classification
Lu Liu, William Hamilton, Guodong Long, Jing Jiang, Hugo Larochelle
Code: https://github.com/liulu112601/URT (official PyTorch implementation)
Abstract
Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it achieves top performance on more data sources than any competing method. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization. Our code is available at https://github.com/liulu112601/URT.
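As a rough sketch of the mechanism described in the abstract, the snippet below implements single-head attention over the outputs of K frozen domain-specific backbones: a task-level query is matched against per-domain keys, and the resulting softmax weights re-weight and compose the domain features into a task-adapted universal representation. This is a minimal illustration in PyTorch (the repo's framework), not the authors' exact architecture; the class name `URTLayer`, the mean-pooled task summary, and `key_dim` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class URTLayer(nn.Module):
    """Minimal single-head sketch of a URT-style attention layer.

    Re-weights features from K domain-specific backbones into a
    task-adapted "universal" representation. The names and the
    pooled task summary are assumptions, not the paper's exact design.
    """

    def __init__(self, feat_dim: int, key_dim: int = 128):
        super().__init__()
        self.query_proj = nn.Linear(feat_dim, key_dim)  # query from the task
        self.key_proj = nn.Linear(feat_dim, key_dim)    # one key per domain
        self.scale = key_dim ** 0.5                     # dot-product scaling

    def forward(self, domain_feats: torch.Tensor) -> torch.Tensor:
        # domain_feats: (n_examples, K_domains, feat_dim), one feature
        # vector per support example from each frozen backbone.
        task_summary = domain_feats.mean(dim=(0, 1))    # crude task embedding
        q = self.query_proj(task_summary)               # (key_dim,)
        k = self.key_proj(domain_feats.mean(dim=0))     # (K_domains, key_dim)
        weights = F.softmax(k @ q / self.scale, dim=0)  # attention over domains
        # Compose: weighted sum of the domain features, per example.
        return (weights[None, :, None] * domain_feats).sum(dim=1)


# Example: 25 support examples, 8 training domains, 512-d features.
layer = URTLayer(feat_dim=512)
universal = layer(torch.randn(25, 8, 512))  # -> (25, 512)
```

At meta-training time, a layer like this would be optimized episodically on few-shot tasks while the domain backbones stay frozen, so the attention learns which domains' features to trust for a given task.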
Tasks
- Few-Shot Image Classification
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Meta-Dataset | URT | Accuracy (%) | 72.15 | — | Unverified |
| Meta-Dataset | URT | Mean Rank | 2.85 | — | Unverified |