SOTAVerified

Zero-Shot Image Classification

Zero-shot image classification is a technique in computer vision where a model can classify images into categories that were not present during training. This is achieved by leveraging semantic information about the categories, such as textual descriptions or relationships between classes.
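In CLIP-style models, this works by embedding the image and a text prompt for each candidate class into a shared space, then picking the class whose text embedding is most similar to the image embedding. A minimal sketch of that scoring step, using toy numpy vectors as hypothetical stand-ins for real encoder outputs:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, class_names):
    """Pick the class whose text embedding is most similar to the image embedding."""
    # L2-normalise so the dot product equals cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                           # one similarity score per class
    probs = np.exp(sims) / np.exp(sims).sum()  # softmax over class scores
    return class_names[int(np.argmax(sims))], probs

# Toy embeddings standing in for encoder outputs (hypothetical values):
classes = ["cat", "dog", "bird"]
text_embs = np.array([[1.0, 0.0],   # embedding of "a photo of a cat"
                      [0.0, 1.0],   # embedding of "a photo of a dog"
                      [0.7, 0.7]])  # embedding of "a photo of a bird"
image_emb = np.array([0.9, 0.1])    # closest in direction to "cat"

label, probs = zero_shot_classify(image_emb, text_embs, classes)
print(label)  # cat
```

Because the class set is given only as text at inference time, nothing stops you from swapping in categories the model never saw labelled examples of during training.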

Papers

Showing 1–25 of 111 papers

Title | Status | Hype
CIBR: Cross-modal Information Bottleneck Regularization for Robust CLIP Generalization | - | 0
LRSCLIP: A Vision-Language Foundation Model for Aligning Remote Sensing Image with Longer Text | Code | 1
Beyond the Visible: Multispectral Vision-Language Learning for Earth Observation | - | 0
Bayesian Test-Time Adaptation for Vision-Language Models | - | 0
MADS: Multi-Attribute Document Supervision for Zero-Shot Image Classification | - | 0
MedUnifier: Unifying Vision-and-Language Pre-training on Medical Data with Vision Generation Task using Discrete Visual Representations | - | 0
Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | Code | 2
KPL: Training-Free Medical Knowledge Mining of Vision-Language Models | Code | 0
Retaining Knowledge and Enhancing Long-Text Representations in CLIP through Dual-Teacher Distillation | - | 0
Post-hoc Probabilistic Vision-Language Models | Code | 1
CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance | - | 0
TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives | - | 0
TaxaBind: A Unified Embedding Space for Ecological Applications | Code | 1
Retrieval-enriched zero-shot image classification in low-resource domains | - | 0
Multilingual Vision-Language Pre-training for the Remote Sensing Domain | Code | 0
Altogether: Image Captioning via Re-aligning Alt-text | Code | 0
Open-vocabulary vs. Closed-set: Best Practice for Few-shot Object Detection Considering Text Describability | Code | 0
Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge | Code | 1
CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features | - | 0
LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model | - | 0
CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Code | 2
DPA: Dual Prototypes Alignment for Unsupervised Adaptation of Vision-Language Models | Code | 0
Do Vision-Language Foundational models show Robust Visual Perception? | Code | 0
CoAPT: Context Attribute words for Prompt Tuning | - | 0
Unconstrained Open Vocabulary Image Classification: Zero-Shot Transfer from Text to Image via CLIP Inversion | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | OpenClip H/14 (34B) (Laion2B) | Top-1 accuracy | 30.01 | - | Unverified
1 | CLIP (ViT B-32) | Average Score | 56.64 | - | Unverified
1 | GLIP (Tiny A) | Average Score | 11.4 | - | Unverified