Evaluation of Output Embeddings for Fine-Grained Image Classification
Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, Bernt Schiele
Abstract
Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with fine-grained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results.
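The abstract's compatibility-based zero-shot scheme can be sketched as follows: a bilinear compatibility score between an image embedding and a class (output) embedding, with prediction by taking the label that maximizes the score. This is a minimal illustration, not the authors' implementation; the dimensions, the random data, and the function names are assumptions for the example.

```python
import numpy as np

# Hypothetical dimensions and random data -- placeholders standing in for
# real image features (e.g. CNN activations) and class embeddings
# (attributes, hierarchy vectors, or word embeddings from text corpora).
rng = np.random.default_rng(0)
d_img, d_cls, n_classes = 4, 3, 5

W = rng.normal(size=(d_img, d_cls))        # learned compatibility matrix
theta_x = rng.normal(size=d_img)           # image embedding theta(x)
phi = rng.normal(size=(n_classes, d_cls))  # one class embedding phi(y) per label

def compatibility(theta_x, W, phi_y):
    """Joint compatibility score F(x, y) = theta(x)^T W phi(y)."""
    return theta_x @ W @ phi_y

# Zero-shot classification: assign the label whose embedding yields the
# highest compatibility score with the image embedding.
scores = np.array([compatibility(theta_x, W, phi[y]) for y in range(n_classes)])
y_pred = int(np.argmax(scores))
```

During training, W would be fit so that the score of an image's correct class exceeds the scores of mismatching classes (e.g. via a ranking loss); at test time, the candidate labels can be classes unseen during training, since only their output embeddings are needed.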
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CUB-200-2011 (zero-shot, 50-way) | SJE (Akata et al., 2015) | Top-1 Accuracy | 50.1 | — | Unverified |