Zero-Shot Image Classification

Zero-shot image classification is a computer vision task in which a model classifies images into categories that were not present during training. It does this by leveraging semantic information about the categories, such as textual descriptions or relationships between classes.
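
In practice, CLIP-style models do this by embedding the image and a set of natural-language label prompts into a shared space and picking the most similar prompt. Below is a minimal sketch using the Hugging Face transformers CLIP API; the checkpoint name is a real public model, while the label prompts and image path are illustrative placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP checkpoint (any CLIP checkpoint works here).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate categories expressed as text prompts; none need to have
# appeared as labels during training.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a platypus"]
image = Image.open("example.jpg")  # hypothetical input image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits, softmaxed into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```

Because the categories are just text, swapping in a new label set requires no retraining, which is what makes the classification "zero-shot".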

Papers

Showing 1–25 of 111 papers

Title | Status | Hype
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | Code | 5
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities | Code | 4
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models | Code | 4
PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | Code | 3
Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | Code | 2
CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Code | 2
PathGen-1.6M: 1.6 Million Pathology Image-text Pairs Generation through Multi-agent Collaboration | Code | 2
Mitigate the Gap: Investigating Approaches for Improving Cross-Modal Alignment in CLIP | Code | 2
WATT: Weight Average Test-Time Adaptation of CLIP | Code | 2
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | Code | 2
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | Code | 2
What does a platypus look like? Generating customized prompts for zero-shot image classification | Code | 2
Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision | Code | 2
LRSCLIP: A Vision-Language Foundation Model for Aligning Remote Sensing Image with Longer Text | Code | 1
Post-hoc Probabilistic Vision-Language Models | Code | 1
TaxaBind: A Unified Embedding Space for Ecological Applications | Code | 1
Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge | Code | 1
Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning | Code | 1
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning | Code | 1
Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations | Code | 1
Can We Talk Models Into Seeing the World Differently? | Code | 1
PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts | Code | 1
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | Code | 1
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | OpenClip H/14 (34B) (Laion2B) | Top-1 accuracy | 30.01 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CLIP (ViT B-32) | Average Score | 56.64 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GLIP (Tiny A) | Average Score | 11.4 | – | Unverified