SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task in which a model learns to classify images into predefined categories from only a few labeled examples per category.

(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
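As an illustration of the episodic setup, the sketch below classifies query images by distance to per-class prototypes (the mean of each class's support embeddings), in the style of Prototypical Networks, one of the papers listed here. It is a minimal toy example: the 2-D "embeddings", function name, and episode sizes are invented for demonstration, not taken from any listed implementation.

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_way):
    """Assign each query embedding to the class whose prototype
    (mean support embedding) is nearest in Euclidean distance."""
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])
    # Distance from every query to every class prototype
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 3-shot episode with hand-made 2-D embeddings
support = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # class 0
                    [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])  # class 1
labels = np.array([0, 0, 0, 1, 1, 1])
query = np.array([[0.05, 0.05], [0.95, 0.95]])
print(prototype_classify(support, labels, query, n_way=2))  # [0 1]
```

In a real few-shot pipeline the embeddings would come from a trained backbone (e.g. a ResNet or ViT encoder) rather than raw coordinates; the episode structure is the same.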

Papers

Showing 1–25 of 353 papers

| Title | Status | Hype |
|---|---|---|
| ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models | Code | 4 |
| AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | Code | 2 |
| GalLoP: Learning Global and Local Prompts for Vision-Language Models | Code | 2 |
| The Balanced-Pairwise-Affinities Feature Transform | Code | 2 |
| Effective Data Augmentation With Diffusion Models | Code | 2 |
| LibFewShot: A Comprehensive Library for Few-shot Learning | Code | 2 |
| Learning Transferable Visual Models From Natural Language Supervision | Code | 2 |
| Prototypical Networks for Few-shot Learning | Code | 2 |
| FewVS: A Vision-Semantics Integration Framework for Few-Shot Image Classification | Code | 1 |
| Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | Code | 1 |
| Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification | Code | 1 |
| Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | Code | 1 |
| BECLR: Batch Enhanced Contrastive Few-Shot Learning | Code | 1 |
| Transductive Zero-Shot and Few-Shot CLIP | Code | 1 |
| Large Language Models are Good Prompt Learners for Low-Shot Image Classification | Code | 1 |
| Diversified in-domain synthesis with efficient fine-tuning for few-shot classification | Code | 1 |
| Simple Semantic-Aided Few-Shot Learning | Code | 1 |
| Context-Aware Meta-Learning | Code | 1 |
| SemiReward: A General Reward Model for Semi-supervised Learning | Code | 1 |
| Language Models as Black-Box Optimizers for Vision-Language Models | Code | 1 |
| Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1 |
| Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning | Code | 1 |
| Multistage Relation Network With Dual-Metric for Few-Shot Hyperspectral Image Classification | Code | 1 |
| ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning | Code | 1 |
| Meta-Learning with a Geometry-Adaptive Preconditioner | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SgVA-CLIP | Accuracy | 97.95 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | | Unverified |
| 5 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | | Unverified |
| 6 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SgVA-CLIP | Accuracy | 98.72 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | | Unverified |
| 8 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | | Unverified |
| 9 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | | Unverified |