SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of this representation.
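The "shared representation plus simple task-specific classifier" idea can be sketched with a nearest-prototype episode, in the style of prototypical networks. This is a minimal illustration, not any specific paper's method: the shared embedding is stood in for by a fixed random linear projection (in practice it would be learned during meta-training), and all names here are hypothetical.

```python
import random
from statistics import mean

random.seed(0)
D_IN, D_EMB = 4, 8
# Stand-in for the representation shared across tasks; in a real system
# this projection would be learned during meta-training, not random.
W = [[random.gauss(0, 1) for _ in range(D_EMB)] for _ in range(D_IN)]

def embed(x):
    # Linear projection of an input vector into the shared embedding space.
    return [sum(x[i] * W[i][j] for i in range(D_IN)) for j in range(D_EMB)]

def dist(a, b):
    # Euclidean distance between two embeddings.
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def classify_episode(support_x, support_y, query_x):
    """Nearest-prototype classification for one few-shot episode.

    The task-specific "classifier" is just one prototype per class:
    the mean embedding of that class's few labeled support examples.
    """
    z_support = [embed(x) for x in support_x]
    classes = sorted(set(support_y))
    prototypes = {
        c: [mean(z[j] for z, y in zip(z_support, support_y) if y == c)
            for j in range(D_EMB)]
        for c in classes
    }
    # Assign each query point to the class of its nearest prototype.
    return [min(classes, key=lambda c: dist(embed(q), prototypes[c]))
            for q in query_x]

# Toy 2-way 2-shot episode: two well-separated clusters in input space.
support_x = [[0, 0, 0, 0], [0.1, 0, 0, 0], [5, 5, 5, 5], [5.1, 5, 5, 5]]
support_y = [0, 0, 1, 1]
query_x = [[0.05, 0, 0, 0], [5.05, 5, 5, 5]]
print(classify_episode(support_x, support_y, query_x))  # → [0, 1]
```

At meta-test time, only the prototypes change per task; the embedding stays fixed, which is what lets the learner adapt from a handful of examples.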

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization

Papers

Showing 2451–2500 of 2964 papers

Title | Status | Hype
RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information | Code | 0
Does language help generalization in vision models? | Code | 0
Chatbots Are Not Reliable Text Annotators | Code | 0
LaSO: Label-Set Operations networks for multi-label few-shot learning | Code | 0
RAMario: Experimental Approach to Reptile Algorithm -- Reinforcement Learning for Mario | Code | 0
DocLangID: Improving Few-Shot Training to Identify the Language of Historical Documents | Code | 0
AugGPT: Leveraging ChatGPT for Text Data Augmentation | Code | 0
Diversity with Cooperation: Ensemble Methods for Few-Shot Classification | Code | 0
Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery | Code | 0
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs | Code | 0
Reading ability detection using eye-tracking data with LSTM-based few-shot learning | Code | 0
Diversity Transfer Network for Few-Shot Learning | Code | 0
Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification | Code | 0
MA³: Model Agnostic Adversarial Augmentation for Few Shot learning | Code | 0
Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking | Code | 0
MAD: Meta Adversarial Defense Benchmark | Code | 0
Unsupervised Question Answering via Answer Diversifying | Code | 0
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning | Code | 0
Large-Scale Few-Shot Learning: Knowledge Transfer With Class Hierarchy | Code | 0
Large Language Models Vote: Prompting for Rare Disease Identification | Code | 0
Make SVM great again with Siamese kernel for few-shot learning | Code | 0
Unsupervised Representation Learning Meets Pseudo-Label Supervised Self-Distillation: A New Approach to Rare Disease Classification | Code | 0
Large Language Models as Attribution Regularizers for Efficient Model Training | Code | 0
Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning | Code | 0
Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning | Code | 0
Diverse Few-Shot Text Classification with Multiple Metrics | Code | 0
Direct multimodal few-shot learning of speech and images | Code | 0
Large Language Models are biased to overestimate profoundness | Code | 0
Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations | Code | 0
Knowledge Graph Transfer Network for Few-Shot Recognition | Code | 0
DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation | Code | 0
Dictionary-Assisted Supervised Contrastive Learning | Code | 0
Knowledge-Enhanced Multi-Label Few-Shot Product Attribute-Value Extraction | Code | 0
Detecting Statements in Text: A Domain-Agnostic Few-Shot Solution | Code | 0
MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation | Code | 0
Kernel Relative-prototype Spectral Filtering for Few-shot Learning | Code | 0
Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning | Code | 0
A Closer Look at the Training Strategy for Modern Meta-Learning | Code | 0
A Simple Approach to Adversarial Robustness in Few-shot Image Classification | Code | 0
Kajal: Extracting Grammar of a Source Code Using Large Language Models | Code | 0
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models | Code | 0
Training-Free Exponential Context Extension via Cascading KV Cache | Code | 0
Joint Graph Learning and Model Fitting in Laplacian Regularized Stratified Models | Code | 0
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI | Code | 0
Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0
MATE: Plugging in Model Awareness to Task Embedding for Meta Learning | Code | 0
Acquiring Bidirectionality via Large and Small Language Models | Code | 0
CFReID: Continual Few-shot Person Re-Identification | Code | 0
Demonstration-based learning for few-shot biomedical named entity recognition under machine reading comprehension | Code | 0
Delta-encoder: an effective sample synthesis method for few-shot object recognition | Code | 0
Page 50 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified