SOTAVerified

Zero-Shot Transfer Image Classification

Papers

Showing 1–10 of 19 papers

| Title | Status | Hype |
| --- | --- | --- |
| EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | Code | 0 |
| M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining | Code | 0 |
| InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | Code | 1 |
| Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1 |
| Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | | 0 |
| Your Diffusion Model is Secretly a Zero-Shot Classifier | Code | 2 |
| EVA-CLIP: Improved Training Techniques for CLIP at Scale | Code | 1 |
| The effectiveness of MAE pre-pretraining for billion-scale pretraining | Code | 1 |
| Scaling Vision Transformers to 22 Billion Parameters | Code | 0 |
| Learning Customized Visual Models with Retrieval-Augmented Knowledge | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LiT-22B | Accuracy (Private) | 87.6 | | Unverified |
| 2 | LiT ViT-e | Accuracy (Private) | 84.9 | | Unverified |
| 3 | CoCa | Accuracy (Private) | 82.7 | | Unverified |
| 4 | EVA-CLIP-18B | Accuracy (Private) | 82.2 | | Unverified |
| 5 | LiT-tuning | Accuracy (Private) | 81.1 | | Unverified |
| 6 | InternVL-C | Accuracy (Private) | 80.6 | | Unverified |
| 7 | EVA-CLIP-E/14+ | Accuracy (Private) | 79.6 | | Unverified |
| 8 | CLIP | Accuracy (Private) | 72.3 | | Unverified |
| 9 | PaLI | Accuracy (Private) | 42.62 | | Unverified |