SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task in which a machine learning model learns to classify images into predefined categories using only a few labeled examples of each category. (Image credit: Learning Embedding Adaptation for Few-Shot Learning)
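A common baseline for this task (used by Prototypical Networks, listed below) is to embed the few labeled "support" images, average each class's embeddings into a prototype, and assign each query image to the nearest prototype. A minimal NumPy sketch of that idea, with toy 2-D vectors standing in for learned embeddings (the function name and data here are illustrative, not from any listed paper's code):

```python
import numpy as np

def few_shot_classify(support, support_labels, queries):
    """Nearest-class-mean ("prototype") classifier.

    support:        (n_support, dim) embeddings of the labeled examples
    support_labels: (n_support,) integer class ids
    queries:        (n_query, dim) embeddings to classify
    Returns an (n_query,) array of predicted class ids.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the class whose prototype is nearest (Euclidean).
    dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way 2-shot episode with 2-D "embeddings".
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.2, 0.4], [4.8, 5.7]])
print(few_shot_classify(support, labels, queries))  # → [0 1]
```

In practice the embeddings come from a trained backbone (e.g. a ResNet or ViT), and many of the leaderboard methods replace the simple Euclidean nearest-prototype rule with learned or transductive refinements.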

Papers

Showing 1–25 of 353 papers

| Title | Status | Hype |
|---|---|---|
| ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models | Code | 4 |
| The Balanced-Pairwise-Affinities Feature Transform | Code | 2 |
| Prototypical Networks for Few-shot Learning | Code | 2 |
| GalLoP: Learning Global and Local Prompts for Vision-Language Models | Code | 2 |
| Effective Data Augmentation With Diffusion Models | Code | 2 |
| Learning Transferable Visual Models From Natural Language Supervision | Code | 2 |
| AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | Code | 2 |
| LibFewShot: A Comprehensive Library for Few-shot Learning | Code | 2 |
| Disentangled Feature Representation for Few-shot Image Classification | Code | 1 |
| Adaptive Subspaces for Few-Shot Learning | Code | 1 |
| Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | Code | 1 |
| Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot Image Classification | Code | 1 |
| Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels | Code | 1 |
| Diffusion Mechanism in Residual Neural Network: Theory and Applications | Code | 1 |
| Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1 |
| Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification | Code | 1 |
| BaseTransformers: Attention over base data-points for One Shot Learning | Code | 1 |
| Class-Aware Patch Embedding Adaptation for Few-Shot Image Classification | Code | 1 |
| Debiased Learning from Naturally Imbalanced Pseudo-Labels | Code | 1 |
| Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning | Code | 1 |
| Constellation Nets for Few-Shot Learning | Code | 1 |
| Context-Aware Meta-Learning | Code | 1 |
| A Universal Representation Transformer Layer for Few-Shot Image Classification | Code | 1 |
| Automated Relational Meta-learning | Code | 1 |
| Attentive Weights Generation for Few Shot Learning via Information Maximization | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SgVA-CLIP | Accuracy | 97.95 | — | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | — | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | — | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | — | Unverified |
| 5 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | — | Unverified |
| 6 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | — | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | — | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | — | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | — | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SgVA-CLIP | Accuracy | 98.72 | — | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | — | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | — | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | — | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | — | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | — | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | — | Unverified |
| 8 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | — | Unverified |
| 9 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | — | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | — | Unverified |