SOTAVerified

Zero-Shot Learning

Zero-shot learning (ZSL) is a model's ability to recognize classes never seen during training: no labeled examples of these classes are available during the supervised learning phase.

Earlier work in zero-shot learning used attributes in a two-step approach to infer unseen classes. In computer vision, more recent advances learn mappings from the image feature space to a semantic space, while other approaches learn non-linear multimodal embeddings. In modern NLP, language models can be evaluated on downstream tasks without any fine-tuning.
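The attribute-based mapping idea above can be sketched in a few lines: project an image feature into the semantic (attribute) space with a learned linear map, then assign the unseen class whose attribute vector is closest. This is a minimal illustrative sketch, not any specific published method; the attribute vectors, the identity projection, and the class names are made up for the example.

```python
import numpy as np

# Hypothetical attribute vectors for classes never seen during training
# (e.g. binary attributes such as "has stripes", "lives in water", ...).
unseen_class_attributes = {
    "zebra":   np.array([1.0, 0.0, 1.0, 0.0]),
    "dolphin": np.array([0.0, 1.0, 0.0, 1.0]),
}

def predict_zero_shot(image_feature, projection, class_attributes):
    """Project an image feature into attribute space, then return the
    unseen class whose attribute vector is most similar (cosine)."""
    semantic = projection @ image_feature  # learned map W: visual -> semantic
    best_class, best_score = None, -np.inf
    for name, attrs in class_attributes.items():
        score = (semantic @ attrs) / (
            np.linalg.norm(semantic) * np.linalg.norm(attrs)
        )
        if score > best_score:
            best_class, best_score = name, score
    return best_class

# Toy usage: identity projection and a feature resembling "zebra" attributes.
W = np.eye(4)
print(predict_zero_shot(np.array([0.9, 0.1, 0.8, 0.0]), W, unseen_class_attributes))
# prints "zebra"
```

In practice the projection W would be learned on seen classes (e.g. by ridge regression from visual features to attribute vectors) and only the nearest-neighbor step would involve the unseen classes.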

Benchmark datasets for zero-shot learning include aPY (Attribute Pascal and Yahoo), AwA (Animals with Attributes), and CUB (Caltech-UCSD Birds), among others.

(Image credit: Prototypical Networks for Few-Shot Learning in PyTorch)


Papers

Showing 1–10 of 1,864 papers:

- GLAD: Generalizable Tuning for Vision-Language Models
- DEARLi: Decoupled Enhancement of Recognition and Localization for Semi-supervised Panoptic Segmentation (code available)
- Zero-Shot Learning for Obsolescence Risk Forecasting
- EVA: Mixture-of-Experts Semantic Variant Alignment for Compositional Zero-Shot Learning
- SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network (code available)
- A Multi-Scale Spatial Attention-Based Zero-Shot Learning Framework for Low-Light Image Enhancement
- Generalizable Agent Modeling for Agent Collaboration-Competition Adaptation with Multi-Retrieval and Dynamic Generation (code available)
- AnyTraverse: An off-road traversability framework with VLM and human operator in the loop
- OTFusion: Bridging Vision-only and Vision-Language Models via Optimal Transport for Transductive Zero-Shot Learning
- Comparison of ConvNeXt and Vision-Language Models for Breast Density Assessment in Screening Mammography

Benchmark Results

#   Model   Metric     Claimed   Verified   Status
1   ZLaP*   Accuracy   93.6      -          Unverified
2   ZLaP    Accuracy   93.4      -          Unverified