SOTAVerified

Zero-Shot Transfer Image Classification

Papers

Showing 1–10 of 19 papers

| Title | Status | Hype |
| --- | --- | --- |
| EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | Code | 0 |
| M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining | Code | 0 |
| InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1 |
| Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1 |
| Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | | 0 |
| Your Diffusion Model is Secretly a Zero-Shot Classifier | Code | 2 |
| EVA-CLIP: Improved Training Techniques for CLIP at Scale | Code | 1 |
| The effectiveness of MAE pre-pretraining for billion-scale pretraining | Code | 1 |
| Scaling Vision Transformers to 22 Billion Parameters | Code | 0 |
| Learning Customized Visual Models with Retrieval-Augmented Knowledge | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CoCa | Accuracy (Private) | 90.2 | | Unverified |
| 2 | LiT-22B | Accuracy (Private) | 90.1 | | Unverified |
| 3 | LiT ViT-e | Accuracy (Private) | 88 | | Unverified |
| 4 | EVA-CLIP-18B | Accuracy (Private) | 87.3 | | Unverified |
| 5 | BASIC (Lion) | Accuracy (Private) | 86.4 | | Unverified |
| 6 | BASIC | Accuracy (Private) | 85.6 | | Unverified |
| 7 | InternVL-C | Accuracy (Private) | 83.8 | | Unverified |
| 8 | EVA-CLIP-E/14+ | Accuracy (Private) | 82.1 | | Unverified |
| 9 | LiT-tuning | Accuracy (Private) | 79.4 | | Unverified |
| 10 | CLIP | Accuracy (Private) | 77.2 | | Unverified |
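For context, the models above are evaluated zero-shot: the image is scored against a text embedding for each class prompt (e.g. "a photo of a {label}") with no task-specific training. Below is a minimal sketch of that scoring step in the CLIP style, using dummy NumPy embeddings in place of a real image/text encoder; the function name, the temperature value of 100, and the prompt format are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def zero_shot_classify(image_emb, class_embs):
    """Score one image against per-class prompt embeddings.

    image_emb: (d,) image embedding.
    class_embs: (k, d) one row per class prompt, e.g. "a photo of a {label}".
    Returns softmax probabilities over the k classes, based on cosine
    similarity scaled by a CLIP-style temperature (assumed 100 here).
    """
    img = image_emb / np.linalg.norm(image_emb)
    cls = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    logits = 100.0 * cls @ img       # scaled cosine similarities
    logits -= logits.max()           # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Dummy embeddings standing in for real encoder outputs.
rng = np.random.default_rng(0)
classes = ["cat", "dog", "car"]
class_embs = rng.normal(size=(3, 8))
image_emb = class_embs[1] + 0.1 * rng.normal(size=8)  # close to the "dog" prompt
probs = zero_shot_classify(image_emb, class_embs)
print(classes[int(np.argmax(probs))])
```

In a real pipeline the two embeddings would come from a jointly trained image and text encoder; the classification step itself is just this normalized dot product, which is why new classes can be added by writing new prompts rather than retraining.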