SOTAVerified

Few-Shot Image Classification

Few-Shot Image Classification is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category. (Image credit: Learning Embedding Adaptation for Few-Shot Learning)
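As a concrete illustration of the N-way K-shot setup, here is a minimal nearest-prototype classifier in the spirit of Prototypical Networks (one of the papers listed below). This is a sketch, not any benchmark's reference implementation: the embeddings and episode sizes are synthetic toy values, and a real pipeline would embed images with a trained backbone first.

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_way):
    """Classify query embeddings by nearest class prototype (squared Euclidean).

    support: (n_way * k_shot, d) embeddings of the labeled support set
    support_labels: (n_way * k_shot,) integer class ids in [0, n_way)
    query: (n_query, d) embeddings to classify
    """
    # Each class prototype is the mean embedding of that class's support examples.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_way)])
    # Squared Euclidean distance from every query point to every prototype.
    d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Predicted class = index of the nearest prototype.
    return d2.argmin(axis=1)

# Toy 2-way 2-shot episode with 2-D embeddings (synthetic values).
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0., 0.5], [5., 5.5]])
print(prototype_classify(support, labels, query, n_way=2))  # [0 1]
```

Each query is assigned to the class whose support-set mean it is closest to; with K=1 this degenerates to nearest-neighbor matching against the single shot per class.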

Papers

Showing 1–50 of 353 papers

Title | Status | Hype
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models | Code | 4
AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | Code | 2
GalLoP: Learning Global and Local Prompts for Vision-Language Models | Code | 2
The Balanced-Pairwise-Affinities Feature Transform | Code | 2
Effective Data Augmentation With Diffusion Models | Code | 2
LibFewShot: A Comprehensive Library for Few-shot Learning | Code | 2
Learning Transferable Visual Models From Natural Language Supervision | Code | 2
Prototypical Networks for Few-shot Learning | Code | 2
FewVS: A Vision-Semantics Integration Framework for Few-Shot Image Classification | Code | 1
Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms | Code | 1
Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification | Code | 1
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | Code | 1
BECLR: Batch Enhanced Contrastive Few-Shot Learning | Code | 1
Transductive Zero-Shot and Few-Shot CLIP | Code | 1
Large Language Models are Good Prompt Learners for Low-Shot Image Classification | Code | 1
Diversified in-domain synthesis with efficient fine-tuning for few-shot classification | Code | 1
Simple Semantic-Aided Few-Shot Learning | Code | 1
Context-Aware Meta-Learning | Code | 1
SemiReward: A General Reward Model for Semi-supervised Learning | Code | 1
Language Models as Black-Box Optimizers for Vision-Language Models | Code | 1
Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning | Code | 1
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability | Code | 1
Multistage Relation Network With Dual-Metric for Few-Shot Hyperspectral Image Classification | Code | 1
ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning | Code | 1
Meta-Learning with a Geometry-Adaptive Preconditioner | Code | 1
VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution | Code | 1
Prompt Tuning based Adapter for Vision-Language Model Adaption | Code | 1
The effectiveness of MAE pre-pretraining for billion-scale pretraining | Code | 1
Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification | Code | 1
Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment | Code | 1
Open-Set Likelihood Maximization for Few-Shot Learning | Code | 1
Class-Aware Patch Embedding Adaptation for Few-Shot Image Classification | Code | 1
Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot Image Classification | Code | 1
Enhancing Few-shot Image Classification with Cosine Transformer | Code | 1
Unsupervised Few-Shot Image Classification by Learning Features into Clustering Space | Code | 1
Few-shot Relational Reasoning via Connection Subgraph Pretraining | Code | 1
BaseTransformers: Attention over base data-points for One Shot Learning | Code | 1
Improving ProtoNet for Few-Shot Video Object Recognition: Winner of ORBIT Challenge 2022 | Code | 1
Transductive Decoupled Variational Inference for Few-Shot Classification | Code | 1
Self-Supervision Can Be a Good Few-Shot Learner | Code | 1
Tree Structure-Aware Few-Shot Image Classification via Hierarchical Aggregation | Code | 1
Task Discrepancy Maximization for Fine-grained Few-Shot Classification | Code | 1
Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification | Code | 1
Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification | Code | 1
Channel Importance Matters in Few-Shot Image Classification | Code | 1
Rethinking Generalization in Few-Shot Classification | Code | 1
Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | Code | 1
Few-Shot Image Classification Benchmarks are Too Far From Reality: Build Back Better with Semantic Task Sampling | Code | 1
Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference | Code | 1
Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification | Code | 1
Page 1 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SgVA-CLIP | Accuracy | 97.95 | | Unverified
2 | CAML [Laion-2b] | Accuracy | 96.2 | | Unverified
3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 95.3 | | Unverified
4 | TRIDENT | Accuracy | 86.11 | | Unverified
5 | PT+MAP+SF+BPA (transductive) | Accuracy | 85.59 | | Unverified
6 | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | | Unverified
7 | PEMnE-BMS* (transductive) | Accuracy | 85.54 | | Unverified
8 | PT+MAP (s+f) (transductive) | Accuracy | 84.81 | | Unverified
9 | BAVARDAGE | Accuracy | 84.8 | | Unverified
10 | EASY 3xResNet12 (transductive) | Accuracy | 84.04 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SgVA-CLIP | Accuracy | 98.72 | | Unverified
2 | CAML [Laion-2b] | Accuracy | 98.6 | | Unverified
3 | P>M>F (P=DINO-ViT-base, M=ProtoNet) | Accuracy | 98.4 | | Unverified
4 | TRIDENT | Accuracy | 95.95 | | Unverified
5 | BAVARDAGE | Accuracy | 91.65 | | Unverified
6 | PEMnE-BMS* (transductive) | Accuracy | 91.53 | | Unverified
7 | Transductive CNAPS + FETI | Accuracy | 91.5 | | Unverified
8 | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | | Unverified
9 | PT+MAP+SF+BPA (transductive) | Accuracy | 91.34 | | Unverified
10 | AmdimNet | Accuracy | 90.98 | | Unverified