Zero-Shot Learning
Zero-shot learning (ZSL) is a model's ability to recognize classes it never saw during training: the test-time classes are disjoint from those available during supervised learning.
Earlier work in zero-shot learning used attributes in a two-step approach to infer unseen classes. In the computer vision context, more recent advances learn mappings from the image feature space to a semantic space; other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without any fine-tuning.
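The attribute-based approach above can be sketched in a few lines: learn a linear map from image features to the semantic (attribute) space on seen classes, then label an unseen-class image by the nearest unseen-class attribute vector. This is a minimal sketch on synthetic data, not any specific published method; all shapes, the ridge solver, and the data generation are illustrative assumptions.

```python
# Minimal zero-shot classification sketch (synthetic, hypothetical data):
# 1) fit a linear map W from image-feature space to attribute space on
#    seen classes, 2) classify unseen-class images by nearest attribute.
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_attr = 16, 4                      # feature / attribute dimensions

# Hypothetical per-class attribute vectors: 3 seen, 2 unseen classes.
seen_attrs   = rng.normal(size=(3, d_attr))
unseen_attrs = rng.normal(size=(2, d_attr))

# Synthetic "images": features correlated with class attributes through a
# ground-truth projection P (stands in for a real feature extractor).
P = rng.normal(size=(d_attr, d_feat))
y_seen = rng.integers(0, 3, size=60)
X_seen = seen_attrs[y_seen] @ P + 0.1 * rng.normal(size=(60, d_feat))

# Fit W with ridge regression (closed form): X_seen @ W ~= attributes.
lam = 1e-3
A = X_seen.T @ X_seen + lam * np.eye(d_feat)
W = np.linalg.solve(A, X_seen.T @ seen_attrs[y_seen])

# Zero-shot inference: project an unseen-class image into attribute space
# and pick the most cosine-similar unseen-class attribute vector.
x_test = unseen_attrs[1] @ P + 0.1 * rng.normal(size=d_feat)
z = x_test @ W
sims = (unseen_attrs @ z) / (
    np.linalg.norm(unseen_attrs, axis=1) * np.linalg.norm(z) + 1e-12)
pred = int(np.argmax(sims))
print(pred)  # index of the predicted unseen class
```

No image of an unseen class is used when fitting W; only the unseen classes' attribute vectors enter at test time, which is what makes the prediction zero-shot.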
Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.
(Image credit: Prototypical Networks for Few-shot Learning in PyTorch)
Further reading:
Papers
Benchmark Results
| # | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| 1 | ZeroDiff | average top-1 classification accuracy | 86.4 | — | Unverified |
| 2 | ZSL-KG | average top-1 classification accuracy | 78.08 | — | Unverified |
| 3 | ZSL_TF-VAEGAN | average top-1 classification accuracy | 72.2 | — | Unverified |
| 4 | DUET (Ours) | average top-1 classification accuracy | 69.9 | — | Unverified |