Multi-level Metric Learning for Few-shot Image Recognition
Haoxing Chen, Huaxiong Li, Yaohui Li, Chunlin Chen
- Code: github.com/chenhaoxing/M2L (official implementation)
Abstract
Few-shot learning aims to train a model from only a few samples per class. Most existing approaches learn representations at either the pixel level or the global level. However, global features may lose local information, while pixel-level features may lose the contextual semantics of the image. Moreover, such methods measure the relation between a query and a support class at only a single level, which is neither comprehensive nor effective. If query images can simultaneously be well classified under three distinct similarity metrics, the queries within a class will be distributed more tightly in a smaller feature space, yielding more discriminative feature maps. Motivated by this, we propose a novel Part-level Embedding Adaptation with Graph (PEAG) method to generate task-specific features. We further propose a Multi-level Metric Learning (MML) method that calculates not only pixel-level similarity but also the similarity of part-level and global-level features. Extensive experiments on popular few-shot image recognition datasets demonstrate the effectiveness of our method against state-of-the-art approaches. Our code is available at https://github.com/chenhaoxing/M2L.
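To make the three-level idea concrete, here is a minimal NumPy sketch of combining global-, part-, and pixel-level similarities between two convolutional feature maps. It is only a conceptual illustration, not the paper's actual method: the pooling grid, cosine similarity, and the `weights` fusion are illustrative assumptions (the paper's MML uses its own metrics, including a KL-based variant).

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two flat feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def multilevel_score(query, support, parts=2, weights=(1.0, 1.0, 1.0)):
    """Fuse pixel-, part-, and global-level similarities between two
    feature maps of shape (C, H, W). All design choices here are
    illustrative, not taken from the paper."""
    C, H, W = query.shape
    # Global level: average-pool each map to a single C-dim descriptor.
    g = cosine(query.mean(axis=(1, 2)), support.mean(axis=(1, 2)))
    # Part level: average-pool over a coarse grid of spatial parts and
    # compare corresponding parts.
    p_scores = []
    for i in range(parts):
        for j in range(parts):
            qs = query[:, i*H//parts:(i+1)*H//parts, j*W//parts:(j+1)*W//parts]
            ss = support[:, i*H//parts:(i+1)*H//parts, j*W//parts:(j+1)*W//parts]
            p_scores.append(cosine(qs.mean(axis=(1, 2)), ss.mean(axis=(1, 2))))
    p = float(np.mean(p_scores))
    # Pixel level: for each query location, take the best-matching
    # support location (a nearest-neighbour local descriptor match).
    q_flat = query.reshape(C, -1).T           # (H*W, C)
    s_flat = support.reshape(C, -1).T         # (H*W, C)
    qn = q_flat / (np.linalg.norm(q_flat, axis=1, keepdims=True) + 1e-8)
    sn = s_flat / (np.linalg.norm(s_flat, axis=1, keepdims=True) + 1e-8)
    x = float((qn @ sn.T).max(axis=1).mean())
    wg, wp, wx = weights
    return wg * g + wp * p + wx * x
```

With unit weights, a feature map compared against itself scores close to 3.0 (each level saturates at cosine 1), while mismatched maps lose score at whichever level they differ most, which is the intuition behind supervising all three levels jointly.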
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Stanford Cars 5-way (1-shot) | MML(KL) | Accuracy | 72.43 | — | Unverified |
| Stanford Cars 5-way (5-shot) | MML(KL) | Accuracy | 91.05 | — | Unverified |
| Stanford Dogs 5-way (1-shot) | MML(KL) | Accuracy | 59.05 | — | Unverified |
| Stanford Dogs 5-way (5-shot) | MML(KL) | Accuracy | 75.59 | — | Unverified |