SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task in which machine learning models are trained to classify images into predefined categories using only a few labeled examples of each category (typically fewer than six per class). (Image credit: Learning Embedding Adaptation for Few-Shot Learning)
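As a rough illustration of the episodic setup many of the papers below build on, here is a minimal prototypical-network-style classifier: class prototypes are the means of the few support embeddings, and queries are assigned to the nearest prototype. The function name and the toy 2-D "embeddings" are illustrative, not taken from any specific paper.

```python
import numpy as np

def prototype_classify(support, support_labels, queries):
    """Assign each query embedding to the class with the nearest prototype,
    where a prototype is the mean of that class's support embeddings."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its few labeled support examples.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode with 2-D embeddings.
support = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0 support examples
                    [5.0, 5.0], [5.1, 4.9]])  # class 1 support examples
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1], [4.8, 5.2]])
print(prototype_classify(support, labels, queries))  # → [0 1]
```

In practice the embeddings come from a learned backbone (e.g. a ResNet or ViT), and the metric or adaptation step is what most of the methods listed below vary.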

Papers

Showing 201–250 of 353 papers

| Title | Status | Hype |
| --- | --- | --- |
| Probabilistic Model-Agnostic Meta-Learning | Code | 0 |
| Prototype Rectification for Few-Shot Learning | Code | 0 |
| RAFIC: Retrieval-Augmented Few-shot Image Classification | Code | 0 |
| Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning | Code | 0 |
| Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks | Code | 0 |
| Adversarially Robust Few-Shot Learning: A Meta-Learning Approach | Code | 0 |
| Scaling Vision Transformers | Code | 0 |
| Self-Supervised Learning For Few-Shot Image Classification | Code | 0 |
| SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification | Code | 0 |
| Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD | Code | 0 |
| Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization | Code | 0 |
| Small Sample Hyperspectral Image Classification Based on the Random Patches Network and Recursive Filtering | Code | 0 |
| Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network | Code | 0 |
| Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning | Code | 0 |
| Subspace Adaptation Prior for Few-Shot Learning | Code | 0 |
| TADAM: Task dependent adaptive metric for improved few-shot learning | Code | 0 |
| TextCaps: Handwritten Character Recognition with Very Small Datasets | Code | 0 |
| Tiny models from tiny data: Textual and null-text inversion for few-shot distillation | Code | 0 |
| Towards a Neural Statistician | Code | 0 |
| Uncertainty in Model-Agnostic Meta-Learning using Variational Inference | Code | 0 |
| Unsupervised Image Classification for Deep Representation Learning | Code | 0 |
| Visual Representation Learning with Self-Supervised Attention for Low-Label High-data Regime | Code | 0 |
| ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation | Code | 0 |
| Toward Multimodal Model-Agnostic Meta-Learning | | 0 |
| Frozen Feature Augmentation for Few-Shot Image Classification | | 0 |
| Multi-scale Adaptive Task Attention Network for Few-Shot Learning | | 0 |
| Multi-Similarity Contrastive Learning | | 0 |
| FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models? | | 0 |
| Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification | | 0 |
| MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene Classification | | 0 |
| Few-Shot Learning of Compact Models via Task-Specific Meta Distillation | | 0 |
| Boosting Few-Shot Text Classification via Distribution Estimation | | 0 |
| NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results | | 0 |
| Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models | | 0 |
| Object-Level Representation Learning for Few-Shot Image Classification | | 0 |
| Few-Shot Learning as Domain Adaptation: Algorithm and Analysis | | 0 |
| Few-shot Image Classification with Multi-Facet Prototypes | | 0 |
| Boosting Few-Shot Learning With Adaptive Margin Loss | | 0 |
| Few-Shot Image Classification via Contrastive Self-Supervised Learning | | 0 |
| Ontology-based n-ball Concept Embeddings Informing Few-shot Image Classification | | 0 |
| Few-shot Image Classification based on Gradual Machine Learning | | 0 |
| Optimal allocation of data across training tasks in meta-learning | | 0 |
| Optimized Generic Feature Learning for Few-shot Classification across Domains | | 0 |
| Few-Shot Image Classification and Segmentation as Visual Question Answering Using Vision-Language Models | | 0 |
| Baby steps towards few-shot learning with multiple semantics | | 0 |
| A Unified Framework with Meta-dropout for Few-shot Learning | | 0 |
| Few-Shot Image Classification Along Sparse Graphs | | 0 |
| Partner-Assisted Learning for Few-Shot Image Classification | | 0 |
| Few-Shot Classification & Segmentation Using Large Language Models Agent | | 0 |
| p-Meta: Towards On-device Deep Model Adaptation | | 0 |
Page 5 of 8

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SgVA-CLIP | Accuracy | 97.95 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 96.2 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | | Unverified |
| 4 | TRIDENT | Accuracy | 86.11 | | Unverified |
| 5 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | | Unverified |
| 6 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | | Unverified |
| 7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | | Unverified |
| 8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | | Unverified |
| 9 | BAVARDAGE | Accuracy | 84.8 | | Unverified |
| 10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SgVA-CLIP | Accuracy | 98.72 | | Unverified |
| 2 | CAML [Laion-2b] | Accuracy | 98.6 | | Unverified |
| 3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | | Unverified |
| 4 | TRIDENT | Accuracy | 95.95 | | Unverified |
| 5 | BAVARDAGE | Accuracy | 91.65 | | Unverified |
| 6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | | Unverified |
| 7 | Transductive CNAPS + FETI | Accuracy | 91.5 | | Unverified |
| 8 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | | Unverified |
| 9 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | | Unverified |
| 10 | AmdimNet | Accuracy | 90.98 | | Unverified |