SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation common to the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
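The "common representation plus task-specific classifier" recipe described above can be sketched in a few lines. This is a hypothetical minimal example, not any specific paper's method: the shared encoder is stood in for by a fixed random feature map (in practice it would be meta-trained), and the task-specific classifier is a simple nearest-class-prototype rule over support-set embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))  # placeholder for a meta-learned encoder


def embed(x):
    # Shared representation: fixed linear map + ReLU, standing in for a
    # network learned during meta-training.
    return np.maximum(x @ W, 0.0)


def prototypes(support_x, support_y):
    # One prototype per class: the mean embedding of that class's
    # support examples.
    z = embed(support_x)
    classes = np.unique(support_y)
    return classes, np.stack([z[support_y == c].mean(axis=0) for c in classes])


def predict(query_x, classes, protos):
    # Task-specific classifier: assign each query to the nearest
    # prototype in embedding space (squared Euclidean distance).
    z = embed(query_x)
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]


# Toy 2-way 5-shot episode: two Gaussian blobs in a 16-d input space.
mu0, mu1 = np.zeros(16), np.full(16, 2.0)
support_x = np.vstack([rng.normal(mu0, 1, (5, 16)), rng.normal(mu1, 1, (5, 16))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.vstack([rng.normal(mu0, 1, (20, 16)), rng.normal(mu1, 1, (20, 16))])
query_y = np.array([0] * 20 + [1] * 20)

classes, protos = prototypes(support_x, support_y)
pred = predict(query_x, classes, protos)
print("episode accuracy:", (pred == query_y).mean())
```

During meta-testing only the support set of the new task is used (to form the prototypes); no gradient steps are needed for the classifier itself, which is what makes this family of methods attractive when labeled examples are scarce.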

Papers

Showing 801–850 of 2964 papers

Title | Status | Hype
Make SVM great again with Siamese kernel for few-shot learning | Code | 0
An Open-set Recognition and Few-Shot Learning Dataset for Audio Event Classification in Domestic Environments | Code | 0
MAD: Meta Adversarial Defense Benchmark | Code | 0
Anomaly Multi-classification in Industrial Scenarios: Transferring Few-shot Learning to a New Task | Code | 0
Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery | Code | 0
Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks | Code | 0
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs | Code | 0
MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation | Code | 0
MEAL: Stable and Active Learning for Few-Shot Prompting | Code | 0
MetaGAD: Meta Representation Adaptation for Few-Shot Graph Anomaly Detection | Code | 0
LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback | Code | 0
Enhancing Masked Time-Series Modeling via Dropping Patches | Code | 0
Limited Data Rolling Bearing Fault Diagnosis with Few-shot Learning | Code | 0
CANet: Class-Agnostic Segmentation Networks with Iterative Refinement and Attentive Few-Shot Learning | Code | 0
LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning | Code | 0
Cancer Vaccine Adjuvant Name Recognition from Biomedical Literature using Large Language Models | Code | 0
L-HYDRA: Multi-Head Physics-Informed Neural Networks | Code | 0
Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning | Code | 0
Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning | Code | 0
End-to-end Generative Zero-shot Learning via Few-shot Learning | Code | 0
Leveraging Large Language Models for Scalable Vector Graphics-Driven Image Understanding | Code | 0
LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting | Code | 0
Logarithm-transform aided Gaussian Sampling for Few-Shot Learning | Code | 0
AniWho : A Quick and Accurate Way to Classify Anime Character Faces in Images | Code | 0
Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution | Code | 0
Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning | Code | 0
Learning to Propagate for Graph Meta-Learning | Code | 0
Learning to Learn Variational Semantic Memory | Code | 0
Learning to Learn Kernels with Variational Random Features | Code | 0
Bayesian Active Meta-Learning for Few Pilot Demodulation and Equalization | Code | 0
A Bridge Between Hyperparameter Optimization and Learning-to-learn | Code | 0
Learning to learn via Self-Critique | Code | 0
Efficient Transfer Learning for Video-language Foundation Models | Code | 0
Learning to Forget for Meta-Learning | Code | 0
Learning to Learn By Self-Critique | Code | 0
Enhancing Unsupervised Graph Few-shot Learning via Set Functions and Optimal Transport | Code | 0
MA³: Model Agnostic Adversarial Augmentation for Few Shot learning | Code | 0
Ensemble Model with Batch Spectral Regularization and Data Blending for Cross-Domain Few-Shot Learning with Unlabeled Data | Code | 0
Leveraging Bottom-Up and Top-Down Attention for Few-Shot Object Detection | Code | 0
Learning Prototype Representations Across Few-Shot Tasks for Event Detection | Code | 0
Active Few-Shot Learning with FASL | Code | 0
Learning New Tasks from a Few Examples with Soft-Label Prototypes | Code | 0
Towards Context-Agnostic Learning Using Synthetic Data | Code | 0
BRUNO: A Deep Recurrent Model for Exchangeable Data | Code | 0
ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation | Code | 0
Learning from the Tangram to Solve Mini Visual Tasks | Code | 0
Effectiveness of Cross-linguistic Extraction of Genetic Information using Generative Large Language Models | Code | 0
Bringing Masked Autoencoders Explicit Contrastive Properties for Point Cloud Self-Supervised Learning | Code | 0
A New Dataset and Empirical Study for Sentence Simplification in Chinese | Code | 0
Advancing Image Retrieval with Few-Shot Learning and Relevance Feedback | Code | 0
Page 17 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified