SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during a meta-training phase so that, during a meta-testing phase, it can generalize well to unseen (but related) tasks from just a few examples. An effective approach to the Few-Shot Learning problem is to learn a representation shared across tasks and to train task-specific classifiers on top of that representation.
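The "shared representation plus task-specific classifier" idea can be sketched with a nearest-prototype classifier in the style of Prototypical Networks. This is a minimal, illustrative sketch: the fixed random projection stands in for a meta-trained embedding network, and all names below are assumptions, not any specific paper's implementation.

```python
# Sketch of few-shot classification via a shared embedding and
# per-task prototypes (nearest-prototype / Prototypical-Networks style).
# The random linear map `W` is a stand-in for a meta-learned encoder.
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Shared representation: a fixed linear embedding."""
    return x @ W

def prototypes(support_x, support_y, W):
    """Task-specific classifier: one prototype (mean embedding) per class,
    built from the few labeled support examples of a new task."""
    z = embed(support_x, W)
    classes = np.unique(support_y)
    return classes, np.stack([z[support_y == c].mean(axis=0) for c in classes])

def predict(query_x, classes, protos, W):
    """Assign each query point to the class of its nearest prototype."""
    z = embed(query_x, W)
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 5-shot episode: two well-separated Gaussian classes.
W = rng.normal(size=(4, 3))
support_x = np.concatenate([rng.normal(0, 0.1, (5, 4)),
                            rng.normal(3, 0.1, (5, 4))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.concatenate([rng.normal(0, 0.1, (10, 4)),
                          rng.normal(3, 0.1, (10, 4))])
query_y = np.array([0] * 10 + [1] * 10)

classes, protos = prototypes(support_x, support_y, W)
pred = predict(query_x, classes, protos, W)
print("query accuracy:", (pred == query_y).mean())
```

At meta-test time only `prototypes` and `predict` run on the new task; in a real meta-learning setup the encoder replacing `W` would have been trained across many such episodes.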

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization

Papers

Showing 1951-2000 of 2964 papers

Title | Status | Hype
Using dependency parsing for few-shot learning in distributional semantics | - | 0
On the Economics of Multilingual Few-shot Learning: Modeling the Cost-Performance Trade-offs of Machine Translated and Manual Data | - | 0
Towards Answering Open-ended Ethical Quandary Questions | - | 0
Feature Extractor Stacking for Cross-domain Few-shot Learning | Code | 0
ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning | - | 0
Towards Unified Prompt Tuning for Few-shot Text Classification | - | 0
ALLSH: Active Learning Guided by Local Sensitivity and Hardness | - | 0
Privacy Enhancement for Cloud-Based Few-Shot Learning | Code | 0
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering | - | 0
Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning | Code | 0
Generalized Knowledge Distillation via Relationship Matching | Code | 0
Lifelong Ensemble Learning based on Multiple Representations for Few-Shot Object Recognition | - | 0
Local Stochastic Bilevel Optimization with Momentum-Based Variance Reduction | - | 0
Improving In-Context Few-Shot Learning via Self-Supervised Training | - | 0
On the generalization capabilities of FSL methods through domain adaptation: a case study in endoscopic kidney stone image classification | - | 0
Medical Coding with Biomedical Transformer Ensembles and Zero/Few-shot Learning | - | 0
Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task | - | 0
Dual Context-Guided Continuous Prompt Tuning for Few-Shot Learning | - | 0
EICO: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization | - | 0
English-Malay Word Embeddings Alignment for Cross-lingual Emotion Classification with Hierarchical Attention Network | - | 0
Knowledge Distillation Meets Few-Shot Learning: An Approach for Few-Shot Intent Classification Within and Across Domains | - | 0
Zero- and Few-Shot NLP with Pretrained Language Models | - | 0
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing | - | 0
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model | - | 0
Executive Function: A Contrastive Value Policy for Resampling and Relabeling Perceptions via Hindsight Summarization? | - | 0
Meta-free few-shot learning via representation learning with weight averaging | - | 0
Function-words Enhanced Attention Networks for Few-Shot Inverse Relation Classification | - | 0
Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks | - | 0
Few-Shot Speaker Identification Using Depthwise Separable Convolutional Network with Channel Attention | - | 0
Few-Shot Object Detection with Proposal Balance Refinement | - | 0
Zero and Few-shot Learning for Author Profiling | - | 0
Few-shot learning for medical text: A systematic review | - | 0
Active Few-Shot Learning with FASL | Code | 0
Less than Few: Self-Shot Video Instance Segmentation | - | 0
A Study on Prompt-based Few-Shot Learning Methods for Belief State Tracking in Task-oriented Dialog Systems | - | 0
Learning Compositional Representations for Effective Low-Shot Generalization | - | 0
Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets | Code | 0
Multi-Modal Few-Shot Object Detection with Meta-Learning-Based Cross-Modal Prompting | - | 0
Impossible Triangle: What's Next for Pre-trained Language Models? | - | 0
GDC- Generalized Distribution Calibration for Few-Shot Learning | - | 0
A Simple Approach to Adversarial Robustness in Few-shot Image Classification | Code | 0
MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text Classification | - | 0
Powering Finetuning in Few-Shot Learning: Domain-Agnostic Bias Reduction with Selected Sampling | - | 0
Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0
AutoProtoNet: Interpretability for Prototypical Networks | Code | 0
On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting | - | 0
Selecting task with optimal transport self-supervised learning for few-shot classification | - | 0
Leveraging pre-trained language models for conversational information seeking from text | - | 0
Enabling hand gesture customization on wrist-worn devices | - | 0
Supervised Graph Contrastive Learning for Few-shot Node Classification | - | 0
Page 40 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified