SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during a meta-training phase so that, during a meta-testing phase, it can generalize well to unseen but related tasks from just a few examples. An effective approach to the few-shot learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of that representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
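The "common representation + task-specific classifier" recipe described above can be sketched with a nearest-prototype classifier in the style of prototypical networks. This is a minimal illustration under stated assumptions, not the penalty method from the cited paper: `embed` stands in for a meta-trained backbone (here a fixed random linear projection, purely hypothetical), and the prototype step plays the role of the per-task classifier.

```python
import numpy as np

def embed(x):
    """Stand-in for the shared representation learned during meta-training.

    Here it is just a fixed random linear projection (hypothetical); a real
    few-shot system would use features from a trained backbone network.
    """
    rng = np.random.default_rng(0)               # fixed seed: same map every call
    w = rng.standard_normal((x.shape[-1], 16))
    return x @ w

def prototype_classifier(support_x, support_y, query_x):
    """Task-specific classifier built on top of the shared representation.

    For an N-way K-shot task: average the embedded support examples of each
    class into a prototype, then label each query by its nearest prototype.
    """
    z_s, z_q = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    protos = np.stack([z_s[support_y == c].mean(axis=0) for c in classes])
    dists = ((z_q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 3-shot task: two well-separated Gaussian clusters in 8 dimensions.
rng = np.random.default_rng(1)
mu0, mu1 = np.zeros(8), np.full(8, 5.0)
support_x = np.vstack([mu0 + 0.1 * rng.standard_normal((3, 8)),
                       mu1 + 0.1 * rng.standard_normal((3, 8))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.vstack([mu0 + 0.1 * rng.standard_normal((2, 8)),
                     mu1 + 0.1 * rng.standard_normal((2, 8))])
pred = prototype_classifier(support_x, support_y, query_x)
print(pred)  # first two queries fall in class 0's cluster, last two in class 1's
```

Only the prototype step is recomputed per task; the embedding is shared, which is what lets the classifier work from a handful of labeled examples.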

Papers

Showing 1551–1600 of 2964 papers

Title | Status | Hype
Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding | Code | 0
Channel-Spatial-Based Few-Shot Bird Sound Event Detection | - | 0
DocumentNet: Bridging the Data Gap in Document Pre-Training | - | 0
Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models | - | 0
Inductive Linear Probing for Few-shot Node Classification | - | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Rethink the Effectiveness of Text Data Augmentation: An Empirical Analysis | - | 0
Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction | - | 0
FLamE: Few-shot Learning from Natural Language Explanations | - | 0
Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning | - | 0
AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing | - | 0
Leveraging Large Language Models for Scalable Vector Graphics-Driven Image Understanding | Code | 0
EMO: Episodic Memory Optimization for Few-Shot Meta-Learning | - | 0
CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud | Code | 0
The ADAIO System at the BEA-2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues | - | 0
A New Dataset and Empirical Study for Sentence Simplification in Chinese | Code | 0
GSHOT: Few-shot Generative Modeling of Labeled Graphs | Code | 0
Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language | - | 0
Few Shot Rationale Generation using Self-Training with Dual Teachers | - | 0
Retrieval-Enhanced Visual Prompt Learning for Few-shot Classification | - | 0
Analyzing Text Representations by Measuring Task Alignment | - | 0
Catalysis distillation neural network for the few shot open catalyst challenge | - | 0
Measuring the Robustness of NLP Models to Domain Shifts | Code | 0
What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models? | - | 0
Conceptual Design Generation Using Large Language Models | Code | 0
SENet: A Spectral Filtering Approach to Represent Exemplars for Few-shot Learning | - | 0
Epistemic Graph: A Plug-And-Play Module For Hybrid Representation Learning | - | 0
Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target | - | 0
Adapting Language-Audio Models as Few-Shot Audio Learners | - | 0
Transfer Learning for Power Outage Detection Task with Limited Training Data | - | 0
Instance-based Max-margin for Practical Few-shot Recognition | - | 0
Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and Spanish-Mixtec | Code | 0
ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation | Code | 0
On convex decision regions in deep network representations | Code | 0
Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts | Code | 0
LAraBench: Benchmarking Arabic AI with Large Language Models | - | 0
A Survey of Diffusion Models in Natural Language Processing | - | 0
Do prompt positions really matter? | Code | 0
Few-Shot Data Synthesis for Open Domain Multi-Hop Question Answering | - | 0
Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks | Code | 0
Active Learning Principles for In-Context Learning with Large Language Models | - | 0
Are Large Language Models Robust Coreference Resolvers? | Code | 0
A Rational Model of Dimension-reduced Human Categorization | - | 0
Automated Few-shot Classification with Instruction-Finetuned Language Models | Code | 0
A Weak Supervision Approach for Few-Shot Aspect Based Sentiment | - | 0
HMSN: Hyperbolic Self-Supervised Learning by Clustering with Ideal Prototypes | - | 0
MetaGAD: Meta Representation Adaptation for Few-Shot Graph Anomaly Detection | Code | 0
Large Language Models Leverage External Knowledge to Extend Clinical Insight Beyond Language Boundaries | - | 0
Exploring the Space of Key-Value-Query Models with Intention | - | 0
CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation Detection in Online Communities | Code | 0
Page 32 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5-way 1~2-shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified