SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category.

(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
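Benchmarks in this task are usually evaluated episodically: each "episode" presents the model with N classes (N-way), K labeled support images per class (K-shot), and a set of unlabeled query images to classify. A minimal sketch of episode sampling, assuming a dataset given as plain `(image, label)` pairs (the function and parameter names here are illustrative, not from any specific library):

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode from (image, label) pairs.

    Hypothetical helper: `dataset` is any iterable of (image, label)
    tuples. Real benchmarks (e.g. mini-ImageNet) additionally split
    classes into disjoint train/val/test sets before drawing episodes.
    """
    # Group images by their class label.
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)

    # Pick N classes; within each, draw K labeled "support" images and
    # n_query "query" images. The model must classify the queries using
    # only the support set as labeled data.
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = random.sample(by_class[cls], k_shot + n_query)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query
```

Reported accuracy on the leaderboards below is typically the mean query accuracy over many such episodes (often with a 95% confidence interval). Entries marked "transductive" are allowed to use the unlabeled query set jointly at inference time, which generally inflates accuracy relative to the inductive setting.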

Papers

Showing 51–75 of 353 papers

| Title | Status | Hype |
|-------|--------|------|
| The Self-Optimal-Transport Feature Transform | Code | 1 |
| HyperShot: Few-Shot Learning by Kernel HyperNetworks | Code | 1 |
| Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning | Code | 1 |
| Worst Case Matters for Few-Shot Recognition | Code | 1 |
| EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients | Code | 1 |
| HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning | Code | 1 |
| Debiased Learning from Naturally Imbalanced Pseudo-Labels | Code | 1 |
| Transformers Can Do Bayesian Inference | Code | 1 |
| On sensitivity of meta-learning to support data | Code | 1 |
| Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning | Code | 1 |
| On the Importance of Firth Bias Reduction in Few-Shot Classification | Code | 1 |
| Sparse Spatial Transformers for Few-Shot Learning | Code | 1 |
| Disentangled Feature Representation for Few-shot Image Classification | Code | 1 |
| Relational Embedding for Few-Shot Classification | Code | 1 |
| Prototype Completion for Few-Shot Learning | Code | 1 |
| Rectifying the Shortcut Learning of Background for Few-Shot Learning | Code | 1 |
| Memory Efficient Meta-Learning with Large Images | Code | 1 |
| Cross-domain Few-shot Learning with Task-specific Adapters | Code | 1 |
| SITTA: Single Image Texture Translation for Data Augmentation | Code | 1 |
| Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification | Code | 1 |
| Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation | Code | 1 |
| Scaling Vision with Sparse Mixture of Experts | Code | 1 |
| Few-Shot Learning by Integrating Spatial and Frequency Representation | Code | 1 |
| Diffusion Mechanism in Residual Neural Network: Theory and Applications | Code | 1 |
| PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1 |
Page 3 of 15

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SgVA-CLIP | Accuracy | 97.95 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | | Unverified |
| 5 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | | Unverified |
| 6 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SgVA-CLIP | Accuracy | 98.72 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | | Unverified |
| 8 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | | Unverified |
| 9 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | | Unverified |