
Zero-Shot Learning

Zero-shot learning (ZSL) is the ability of a model to recognize classes it never saw during training: the target classes are absent from the supervised training set.

Earlier work in zero-shot learning used attributes in a two-step approach to infer unseen classes. In computer vision, more recent methods learn mappings from the image feature space to a semantic space, while other approaches learn non-linear multimodal embeddings. In modern NLP, language models can be evaluated on downstream tasks in a zero-shot fashion, i.e. without fine-tuning.
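The attribute-based recipe above can be sketched in a few lines: a trained regressor maps an image's features into a semantic (attribute) space, and the unseen class whose attribute vector is nearest to that projection wins. The attribute vectors and the projected point below are toy values chosen for illustration, not data from any real ZSL benchmark.

```python
import numpy as np

# Toy semantic (attribute) vectors for classes never seen in training,
# e.g. dimensions [has_stripes, has_hooves, lives_in_water].
class_attributes = {
    "zebra": np.array([1.0, 1.0, 0.0]),
    "whale": np.array([0.0, 0.0, 1.0]),
}

def predict_unseen(semantic_projection: np.ndarray) -> str:
    """Pick the unseen class whose attribute vector has the highest
    cosine similarity with the image's projected semantic vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(class_attributes,
               key=lambda c: cos(semantic_projection, class_attributes[c]))

# Suppose a trained regressor projected an image's features to this
# semantic-space point (striped, hooved, not aquatic):
projected = np.array([0.9, 0.8, 0.1])
print(predict_unseen(projected))  # -> zebra
```

The key property is that `predict_unseen` never needed labeled examples of "zebra" or "whale"; only their attribute descriptions, which is what makes the classes zero-shot.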

Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.

( Image credit: Prototypical Networks for Few-shot Learning in PyTorch )


Papers

Showing 651–700 of 1864 papers

Title | Status | Hype
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning | Code | 1
Evaluating the Fairness of Discriminative Foundation Models in Computer Vision | Code | 0
ChatGPT-guided Semantics for Zero-shot Learning | Code | 0
Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data | Code | 0
Prompting Scientific Names for Zero-Shot Species Recognition | — | 0
LLM-augmented Preference Learning from Natural Language | — | 0
ZEST: Attention-based Zero-Shot Learning for Unseen IoT Device Classification | Code | 0
Attribute Localization and Revision Network for Zero-Shot Learning | — | 0
VeCLIP: Improving CLIP Training via Visual-enriched Captions | Code | 2
Blind Dates: Examining the Expression of Temporality in Historical Photographs | — | 0
Uni3D: Exploring Unified 3D Representation at Scale | Code | 2
Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift | — | 0
Understanding prompt engineering may not require rethinking generalization | — | 0
Zero-shot Learning of Drug Response Prediction for Preclinical Drug Screening | Code | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Investigating the Limitation of CLIP Models: The Worst-Performing Categories | — | 0
Time-LLM: Time Series Forecasting by Reprogramming Large Language Models | Code | 4
DST-Det: Simple Dynamic Self-Training for Open-Vocabulary Object Detection | Code | 1
Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP | — | 0
An easy zero-shot learning combination: Texture Sensitive Semantic Segmentation IceHrNet and Advanced Style Transfer Learning Strategy | Code | 0
One for All: Towards Training One Graph Model for All Classification Tasks | Code | 2
Telling Stories for Common Sense Zero-Shot Action Recognition | Code | 0
Robust Internal Representations for Domain Generalization | — | 0
VPA: Fully Test-Time Visual Prompt Adaptation | — | 0
Are Human-generated Demonstrations Necessary for In-context Learning? | Code | 1
CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free | Code | 1
Dual Feature Augmentation Network for Generalized Zero-shot Learning | Code | 1
Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation | Code | 1
Exploiting CLIP-based Multi-modal Approach for Artwork Classification and Retrieval | — | 0
Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters | — | 0
Auto-ACD: A Large-scale Dataset for Audio-Language Representation Learning | — | 0
DreamLLM: Synergistic Multimodal Comprehension and Creation | Code | 2
Harnessing the Zero-Shot Power of Instruction-Tuned Large Language Model in End-to-End Speech Recognition | — | 0
PolicyGPT: Automated Analysis of Privacy Policies with Large Language Models | — | 0
Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs | Code | 1
Using Large Language Model to Solve and Explain Physics Word Problems Approaching Human Level | — | 0
Exploring Meta Information for Audio-based Zero-shot Bird Classification | Code | 0
Large Language Models Can Infer Psychological Dispositions of Social Media Users | — | 0
Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting | Code | 0
TAP: Targeted Prompting for Task Adaptive Generation of Textual Training Instances for Visual Classification | Code | 1
Instance Adaptive Prototypical Contrastive Embedding for Generalized Zero Shot Learning | — | 0
Zero-Shot Visual Classification with Guided Cropping | — | 0
Enhancing Representation in Radiography-Reports Foundation Model: A Granular Alignment Algorithm Using Masked Contrastive Learning | Code | 1
Zero-shot Learning with Minimum Instruction to Extract Social Determinants and Family History from Clinical Notes using GPT Model | — | 0
Mitigating Word Bias in Zero-shot Prompt-based Classifiers | Code | 0
Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment | — | 0
ETP: Learning Transferable ECG Representations via ECG-Text Pre-training | — | 0
AGIBench: A Multi-granularity, Multimodal, Human-referenced, Auto-scoring Benchmark for Large Language Models | — | 0
Bridging the Projection Gap: Overcoming Projection Bias Through Parameterized Distance Learning | — | 0
EdaDet: Open-Vocabulary Object Detection Using Early Dense Alignment | — | 0
Page 14 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ZeroDiff | average top-1 classification accuracy | 87.5 | — | Unverified
2 | DUET | average top-1 classification accuracy | 72.3 | — | Unverified
3 | Composer | average top-1 classification accuracy | 69.4 | — | Unverified
4 | HDC-ZSC-MLP | average top-1 classification accuracy | 65.6 | — | Unverified
5 | ZSL_TF-VAEGAN | average top-1 classification accuracy | 64.9 | — | Unverified
6 | ZLaP | Accuracy | 64.3 | — | Unverified
7 | ZLaP* | Accuracy | 64.2 | — | Unverified
8 | HDC-ZSC | average top-1 classification accuracy | 63.8 | — | Unverified
9 | SPOT | average top-1 classification accuracy | 62.9 | — | Unverified
10 | f-VAEGAN-D2 | average top-1 classification accuracy | 61 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | dmis-lab/biobert-v1.1 | Accuracy | 26.15 | — | Unverified
2 | meta-llama/Meta-Llama-3-8B-Instruct | Accuracy | 25.84 | — | Unverified
3 | epfl-llm/meditron-7b | Accuracy | 25.75 | — | Unverified
4 | dmis-lab/meerkat-7b-v1.0 | Accuracy | 25.68 | — | Unverified
5 | meta-llama/Meta-Llama-3-8B-Instruct | Accuracy | 25.65 | — | Unverified
6 | HuggingFaceH4/zephyr-7b-beta | Accuracy | 25.54 | — | Unverified
7 | dmis-lab/biobert-v1.1 | Accuracy | 25.46 | — | Unverified
8 | epfl-llm/meditron-70b | Accuracy | 25.36 | — | Unverified
9 | epfl-llm/meditron-70b | Accuracy | 25.26 | — | Unverified
10 | HuggingFaceH4/zephyr-7b-beta | Accuracy | 25.06 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZeroDiff | average top-1 classification accuracy | 77.3 | — | Unverified
2 | SPOT (VAEGAN) | average top-1 classification accuracy | 66.04 | — | Unverified
3 | ZSL_TF-VAEGAN | average top-1 classification accuracy | 66 | — | Unverified
4 | f-VAEGAN | average top-1 classification accuracy | 64.7 | — | Unverified
5 | DUET (Ours) | average top-1 classification accuracy | 64.4 | — | Unverified
6 | LisGAN | average top-1 classification accuracy | 61.7 | — | Unverified
7 | TCN | average top-1 classification accuracy | 61.5 | — | Unverified
8 | f-CLSWGAN | average top-1 classification accuracy | 60.8 | — | Unverified
9 | Cycle-WGAN | average top-1 classification accuracy | 59.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZeroDiff | average top-1 classification accuracy | 86.4 | — | Unverified
2 | ZSL-KG | average top-1 classification accuracy | 78.08 | — | Unverified
3 | ZSL_TF-VAEGAN | average top-1 classification accuracy | 72.2 | — | Unverified
4 | DUET (Ours) | average top-1 classification accuracy | 69.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Accuracy | 84 | — | Unverified
2 | ZLaP* | Accuracy | 83.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 93.6 | — | Unverified
2 | ZLaP | Accuracy | 93.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 74.2 | — | Unverified
2 | ZLaP | Accuracy | 74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ViT-B/16 | Average mAP | 60.17 | — | Unverified
2 | ResNet-50 | Average mAP | 56.19 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Accuracy | 51.2 | — | Unverified
2 | ZLaP* | Accuracy | 51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Accuracy | 29.1 | — | Unverified
2 | ZLaP* | Accuracy | 29 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Accuracy | 75.9 | — | Unverified
2 | ZLaP* | Accuracy | 75.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 87.9 | — | Unverified
2 | ZLaP | Accuracy | 87.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Top 1 Accuracy | 72.1 | — | Unverified
2 | ZLaP* | Top 1 Accuracy | 72.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HiTeA | Accuracy | 21.7 | — | Unverified
2 | HiTeA | Accuracy | 0.46 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HiTeA | Accuracy | 37.4 | — | Unverified
2 | HiTeA | Accuracy | 0.56 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SPOT | average top-1 classification accuracy | 71.9 | — | Unverified
2 | ZSL_TF-VAEGAN | average top-1 classification accuracy | 70.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP | Accuracy | 90 | — | Unverified
2 | ZLaP* | Accuracy | 89 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 71.8 | — | Unverified
2 | ZLaP | Accuracy | 71.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 71.4 | — | Unverified
2 | ZLaP | Accuracy | 71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 76.3 | — | Unverified
2 | ZLaP | Accuracy | 76.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLIP(ViT-B/16) | Average mAP | 85.77 | — | Unverified
2 | CLIP(ResNet-50) | Average mAP | 84.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZSL-KG | Top-1 | 60.54 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | zsl_ADA | Average Per-Class Accuracy | 70.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZLaP* | Accuracy | 63.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MSDA | Pearson correlation coefficient (PCC) | 0.52 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SeViLA | Accuracy | 72.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | M^2-Encoder | Accuracy | 80.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FrozenBiLM | Accuracy | 51.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CZSL | A-acc | 36 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZS3Net | k=10 mIOU | 26.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ZSL-KG | Accuracy | 88.98 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | VideoChat2 | Accuracy | 40.6 | — | Unverified