SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task in which machine learning models learn to classify images into predefined categories using only a few labeled examples of each category.

( Image credit: Learning Embedding Adaptation for Few-Shot Learning )
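Many of the methods listed below (the ProtoNet-style entries in particular) classify a query image by comparing its feature vector to per-class prototypes computed from the few labeled support examples. A minimal sketch of that idea in NumPy, assuming features have already been extracted by some backbone; `prototype_classify` is a hypothetical helper name, not an API from any listed paper:

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Nearest-prototype classification (ProtoNet-style) on precomputed features.

    support: (n_support, d) feature vectors of the labeled support examples
    support_labels: (n_support,) integer class labels
    query: (n_query, d) feature vectors to classify
    Returns an array of predicted labels, shape (n_query,).
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support features.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the class with the nearest prototype (Euclidean).
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode in a 2-D feature space.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0.05, 0.1], [4.9, 5.2]])
print(prototype_classify(support, labels, query))  # [0 1]
```

Real systems differ mainly in how the features are produced (meta-learned, pre-trained, or adapted per task) and in the distance or classifier used on top of the prototypes.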

Papers

Showing 101–125 of 353 papers

| Title | Status | Hype |
| --- | --- | --- |
| Constellation Nets for Few-Shot Learning | Code | 1 |
| Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML | Code | 1 |
| Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples | Code | 1 |
| Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? | Code | 1 |
| Context-Aware Meta-Learning | Code | 1 |
| Scaling Vision with Sparse Mixture of Experts | Code | 1 |
| Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification | Code | 1 |
| Self-Supervision Can Be a Good Few-Shot Learner | Code | 1 |
| Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks | Code | 1 |
| Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition | Code | 1 |
| Few-Shot Image Classification Benchmarks are Too Far From Reality: Build Back Better with Semantic Task Sampling | Code | 1 |
| Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier | Code | 1 |
| Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification | Code | 1 |
| Sparse Spatial Transformers for Few-Shot Learning | Code | 1 |
| Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning | Code | 1 |
| Cross-domain Few-shot Learning with Task-specific Adapters | Code | 1 |
| Few-Shot Learning by Integrating Spatial and Frequency Representation | Code | 1 |
| Improving ProtoNet for Few-Shot Video Object Recognition: Winner of ORBIT Challenge 2022 | Code | 1 |
| Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot Image Classification | Code | 1 |
| Instance Credibility Inference for Few-Shot Learning | Code | 1 |
| Few-shot Relational Reasoning via Connection Subgraph Pretraining | Code | 1 |
| FewVS: A Vision-Semantics Integration Framework for Few-Shot Image Classification | Code | 1 |
| Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | Code | 1 |
| Few-Shot Classification with Feature Map Reconstruction Networks | Code | 1 |
| Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification | Code | 1 |
Page 5 of 15

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SgVA-CLIP | Accuracy | 97.95 | - | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | - | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | - | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | - | Unverified |
| 5 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | - | Unverified |
| 6 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | - | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | - | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | - | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | - | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | - | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SgVA-CLIP | Accuracy | 98.72 | - | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | - | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | - | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | - | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | - | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | - | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | - | Unverified |
| 8 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | - | Unverified |
| 9 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | - | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | - | Unverified |