SOTAVerified

Zero-Shot Transfer Image Classification

Papers

Showing 1–10 of 19 papers

| Title | Status | Hype |
|---|---|---|
| EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | Code | 0 |
| M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining | Code | 0 |
| InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1 |
| Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1 |
| Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | | 0 |
| Your Diffusion Model is Secretly a Zero-Shot Classifier | Code | 2 |
| EVA-CLIP: Improved Training Techniques for CLIP at Scale | Code | 1 |
| The effectiveness of MAE pre-pretraining for billion-scale pretraining | Code | 1 |
| Scaling Vision Transformers to 22 Billion Parameters | Code | 0 |
| Learning Customized Visual Models with Retrieval-Augmented Knowledge | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CoCa | Accuracy (Private) | 77.6 | | Unverified |
| 2 | BASIC (Lion) | Accuracy (Private) | 77.2 | | Unverified |
| 3 | BASIC | Accuracy (Private) | 76.1 | | Unverified |
| 4 | EVA-CLIP-18B | Accuracy (Private) | 74.7 | | Unverified |
| 5 | InternVL-C | Accuracy (Private) | 73.9 | | Unverified |
| 6 | EVA-CLIP-E/14+ | Accuracy (Private) | 71.6 | | Unverified |
| 7 | AltCLIP | Accuracy (Private) | 58.7 | | Unverified |
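The models above are all evaluated on the same zero-shot transfer protocol: a dual-encoder (CLIP-style) model embeds the image and a set of text prompts such as "a photo of a {label}", and the label whose prompt embedding is most similar to the image embedding is the prediction, with no training on the target dataset. A minimal NumPy sketch of that scoring step is below; the embeddings are random stand-ins for real encoder outputs, and the 512-dimensional size and prompt template are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # L2-normalize along the last axis so cosine similarity
    # reduces to a plain dot product.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical label set; in a real run each label would be wrapped in a
# prompt like "a photo of a {label}" and passed through the text encoder.
labels = ["cat", "dog", "car"]

# Stand-ins for encoder outputs (one text embedding per label, one image).
text_emb = normalize(rng.normal(size=(len(labels), 512)))
image_emb = normalize(rng.normal(size=(512,)))

# Similarity of the image to every label prompt; highest score wins.
scores = text_emb @ image_emb
predicted = labels[int(np.argmax(scores))]
print(predicted)
```

Benchmark accuracy is then just the fraction of test images whose predicted label matches the ground truth, which is what the "Claimed" column reports.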