SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
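
To make the representation-plus-classifier recipe above concrete, here is a minimal sketch (not taken from the cited paper) in the style of prototypical networks: a shared pretrained backbone embeds each task's support set, per-class prototypes are averaged, and queries are classified by nearest prototype. `backbone`, `support_x`, `support_y`, and `query_x` are hypothetical placeholders for a feature extractor and one task's data.

```python
import torch

def class_prototypes(backbone, support_x, support_y, n_classes):
    """Mean embedding per class from one task's support set.
    `backbone` is assumed to be any pretrained feature extractor
    (frozen here for simplicity)."""
    with torch.no_grad():
        emb = backbone(support_x)                      # (n_support, d)
    return torch.stack([emb[support_y == c].mean(dim=0)
                        for c in range(n_classes)])    # (n_classes, d)

def classify_queries(backbone, query_x, protos):
    """Task-specific classifier: nearest prototype by Euclidean distance."""
    with torch.no_grad():
        emb = backbone(query_x)                        # (n_query, d)
    dists = torch.cdist(emb, protos)                   # (n_query, n_classes)
    return dists.argmin(dim=1)                         # predicted class ids
```

Because the task-specific classifier here is just nearest-prototype matching in the shared embedding space, adapting to a new task at meta-test time requires only a forward pass over its few support examples, with no gradient updates.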

Papers

Showing 301–350 of 2,964 papers

Title | Status | Hype
EPCL: Frozen CLIP Transformer is An Efficient Point Cloud Encoder | Code | 1
Finetune like you pretrain: Improved finetuning of zero-shot vision models | Code | 1
Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot Image Classification | Code | 1
Better Generalized Few-Shot Learning Even Without Base Data | Code | 1
RankDNN: Learning to Rank for Few-shot Learning | Code | 1
TEMPERA: Test-Time Prompting via Reinforcement Learning | Code | 1
QAmeleon: Multilingual QA with Only 5 Examples | Code | 1
Retrieval-Augmented Generative Question Answering for Event Argument Extraction | Code | 1
AdaptKeyBERT: An Attention-Based approach towards Few-Shot & Zero-Shot Domain Adaptation of KeyBERT | Code | 1
Enhancing Few-shot Image Classification with Cosine Transformer | Code | 1
Point-DAE: Denoising Autoencoders for Self-supervised Point Cloud Learning | Code | 1
miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings | Code | 1
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning | Code | 1
Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid | Code | 1
Better Few-Shot Relation Extraction with Label Prompt Dropout | Code | 1
Contrastive Prototypical Network with Wasserstein Confidence Penalty | Code | 1
Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus | Code | 1
Meta-Learning via Classifier(-free) Diffusion Guidance | Code | 1
RARR: Researching and Revising What Language Models Say, Using Language Models | Code | 1
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning | Code | 1
Unified Vision and Language Prompt Learning | Code | 1
Self-Attention Message Passing for Contrastive Few-Shot Learning | Code | 1
Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis | Code | 1
ME-D2N: Multi-Expert Domain Decompositional Network for Cross-Domain Few-Shot Learning | Code | 1
Hypernetwork approach to Bayesian MAML | Code | 1
Bayesian Prompt Learning for Image-Language Model Generalization | Code | 1
Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment | Code | 1
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training | Code | 1
LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models | Code | 1
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning | Code | 1
Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models | Code | 1
FETA: Towards Specializing Foundation Models for Expert Task Applications | Code | 1
Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective | Code | 1
Adversarial Feature Augmentation for Cross-domain Few-shot Classification | Code | 1
Transductive Decoupled Variational Inference for Few-Shot Classification | Code | 1
Hierarchical Attention Network for Few-Shot Object Detection via Meta-Contrastive Learning | Code | 1
GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks | Code | 1
Few-shot Adaptation Works with UnpredicTable Data | Code | 1
Few-shot Learning with Class-Covariance Metric for Hyperspectral Image Classification | Code | 1
Rethinking Few-Shot Object Detection on a Multi-Domain Benchmark | Code | 1
Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach | Code | 1
Self-Supervision Can Be a Good Few-Shot Learner | Code | 1
Learn-to-Decompose: Cascaded Decomposition Network for Cross-Domain Few-Shot Facial Expression Recognition | Code | 1
Segment-level Metric Learning for Few-shot Bioacoustic Event Detection | Code | 1
Convolutional Bypasses Are Better Vision Transformer Adapters | Code | 1
Tree Structure-Aware Few-Shot Image Classification via Hierarchical Aggregation | Code | 1
ViQuAE, a Dataset for Knowledge-based Visual Question Answering about Named Entities | Code | 1
Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification | Code | 1
Task-Adaptive Few-shot Node Classification | Code | 1
Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification | Code | 1
Page 7 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified