SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category.

(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
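As an illustrative sketch of the task setup (not any particular paper's method), one common baseline is prototypical-network-style classification: average the support embeddings of each class into a prototype, then assign each query image to the nearest prototype. The embeddings and the `prototype_classify` helper below are hypothetical stand-ins for a real feature extractor.

```python
import numpy as np

def prototype_classify(support, support_labels, queries):
    """Nearest-prototype few-shot classification.

    support: (n_support, d) embeddings of the labeled support examples
    support_labels: (n_support,) integer class ids
    queries: (n_query, d) embeddings to classify
    Returns predicted class ids for the queries.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.1], [1.0, 0.9]])
print(prototype_classify(support, labels, queries))  # [0 1]
```

Many of the methods listed below replace the mean prototype or the Euclidean metric with learned alternatives, but this episode structure (support set, query set, N ways, K shots) is shared across the benchmark.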

Papers

Showing 226–250 of 353 papers

- Multi-scale Adaptive Task Attention Network for Few-Shot Learning
- Multi-Similarity Contrastive Learning
- Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
- Transfer Learning on Manifolds via Learned Transport Operators
- MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene Classification
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis
- Asymmetric Distribution Measure for Few-shot Learning
- NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results
- Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models
- Object-Level Representation Learning for Few-Shot Image Classification
- Few-shot Image Classification with Multi-Facet Prototypes
- Few-Shot Image Classification via Contrastive Self-Supervised Learning
- Assessing two novel distance-based loss functions for few-shot image classification
- Few-shot Image Classification based on Gradual Machine Learning
- Ontology-based n-ball Concept Embeddings Informing Few-shot Image Classification
- Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models
- Optimal allocation of data across training tasks in meta-learning
- Optimized Generic Feature Learning for Few-shot Classification across Domains
- Few-Shot Image Classification Along Sparse Graphs
- PAC-Bayes meta-learning with implicit task-specific posteriors
- PaLI: A Jointly-Scaled Multilingual Language-Image Model
- Few-Shot Classification & Segmentation Using Large Language Models Agent
- Partner-Assisted Learning for Few-Shot Image Classification
- Few-Shot Action Recognition with Compromised Metric via Optimal Transport
- p-Meta: Towards On-device Deep Model Adaptation
Page 10 of 15

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SgVA-CLIP | Accuracy | 97.95 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | | Unverified |
| 5 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | | Unverified |
| 6 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SgVA-CLIP | Accuracy | 98.72 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | | Unverified |
| 8 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | | Unverified |
| 9 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | | Unverified |
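Accuracies like those above are conventionally reported as the mean over many randomly sampled N-way K-shot episodes, usually with a 95% confidence interval. A minimal sketch of that evaluation protocol, assuming a hypothetical `classify(support, support_labels, queries)` function standing in for any of the methods listed:

```python
import numpy as np

def evaluate_episodes(classify, data, labels, n_way=5, k_shot=1,
                      n_query=15, n_episodes=600, seed=0):
    """Mean episode accuracy and 95% CI, the usual few-shot protocol.

    classify(support, support_labels, queries) -> predicted labels
    data: (n, d) embeddings; labels: (n,) class ids.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    accs = []
    for _ in range(n_episodes):
        episode_classes = rng.choice(classes, size=n_way, replace=False)
        sup_x, sup_y, qry_x, qry_y = [], [], [], []
        for new_id, c in enumerate(episode_classes):
            # Sample k_shot support and n_query query examples of this class.
            idx = rng.permutation(np.flatnonzero(labels == c))[:k_shot + n_query]
            sup_x.append(data[idx[:k_shot]]); sup_y += [new_id] * k_shot
            qry_x.append(data[idx[k_shot:]]); qry_y += [new_id] * n_query
        pred = classify(np.concatenate(sup_x), np.array(sup_y), np.concatenate(qry_x))
        accs.append((pred == np.array(qry_y)).mean())
    accs = np.array(accs)
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(n_episodes)
    return accs.mean(), ci95

# Toy check: 1-nearest-neighbor on two well-separated clusters.
def nn_classify(sup, sup_y, qry):
    d = np.linalg.norm(qry[:, None] - sup[None], axis=-1)
    return sup_y[d.argmin(axis=1)]

data = np.concatenate([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels = np.array([0] * 5 + [1] * 5)
mean_acc, ci = evaluate_episodes(nn_classify, data, labels,
                                 n_way=2, k_shot=1, n_query=2, n_episodes=10)
print(f"{100 * mean_acc:.2f} ± {100 * ci:.2f}")  # 100.00 ± 0.00
```

Note that transductive entries in the tables (e.g. the PT+MAP variants) additionally exploit the unlabeled query set jointly at test time, so their scores are not directly comparable to inductive methods under this per-query protocol.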