SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation shared across tasks and to train task-specific classifiers on top of that representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
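A common instantiation of the shared-representation approach described above is nearest-centroid (prototypical-style) classification: embed the few labeled support examples with the shared encoder, average them into one prototype per class, and label each query by its nearest prototype. The sketch below shows this step on pre-computed embeddings with plain NumPy; the function name, the toy 2-way 2-shot episode, and the 2-D embedding space are illustrative assumptions, not taken from the source.

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Classify query embeddings by distance to class centroids
    computed from a handful of labeled support embeddings."""
    classes = np.unique(support_y)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(
        query_x[:, None, :] - prototypes[None, :, :], axis=-1
    )
    # Each query gets the label of its nearest prototype.
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way 2-shot episode in a 2-D embedding space.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [4.8, 5.2]])

print(nearest_centroid_few_shot(support_x, support_y, query_x))  # -> [0 1]
```

In a full few-shot pipeline, `support_x` and `query_x` would be the outputs of the meta-trained encoder; only the lightweight per-episode classifier (here, the centroids) is fit at meta-test time.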

Papers

Showing 1051–1100 of 2964 papers

| Title | Status | Hype |
|---|---|---|
| CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud | Code | 0 |
| The ADAIO System at the BEA-2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues | — | 0 |
| EMO: Episodic Memory Optimization for Few-Shot Meta-Learning | — | 0 |
| A New Dataset and Empirical Study for Sentence Simplification in Chinese | Code | 0 |
| GSHOT: Few-shot Generative Modeling of Labeled Graphs | Code | 0 |
| Few Shot Rationale Generation using Self-Training with Dual Teachers | — | 0 |
| Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language | — | 0 |
| Retrieval-Enhanced Visual Prompt Learning for Few-shot Classification | — | 0 |
| Few-Shot Open-Set Learning for On-Device Customization of KeyWord Spotting Systems | Code | 1 |
| Consistency-guided Prompt Learning for Vision-Language Models | Code | 1 |
| Analyzing Text Representations by Measuring Task Alignment | — | 0 |
| Measuring the Robustness of NLP Models to Domain Shifts | Code | 0 |
| Catalysis distillation neural network for the few shot open catalyst challenge | — | 0 |
| What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models? | — | 0 |
| Conceptual Design Generation Using Large Language Models | Code | 0 |
| SENet: A Spectral Filtering Approach to Represent Exemplars for Few-shot Learning | — | 0 |
| Task-Equivariant Graph Few-shot Learning | Code | 1 |
| Epistemic Graph: A Plug-And-Play Module For Hybrid Representation Learning | — | 0 |
| Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target | — | 0 |
| Deeply Coupled Cross-Modal Prompt Learning | Code | 1 |
| The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification | Code | 1 |
| Adapting Language-Audio Models as Few-Shot Audio Learners | — | 0 |
| Learning to Learn from APIs: Black-Box Data-Free Meta-Learning | Code | 1 |
| Transfer Learning for Power Outage Detection Task with Limited Training Data | — | 0 |
| Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and Spanish-Mixtec | Code | 0 |
| Instance-based Max-margin for Practical Few-shot Recognition | — | 0 |
| On convex decision regions in deep network representations | Code | 0 |
| ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation | Code | 0 |
| Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts | Code | 0 |
| Training on Thin Air: Improve Image Classification with Generated Data | Code | 1 |
| Sentiment Analysis in the Era of Large Language Models: A Reality Check | Code | 1 |
| A Survey of Diffusion Models in Natural Language Processing | — | 0 |
| LAraBench: Benchmarking Arabic AI with Large Language Models | — | 0 |
| Improving few-shot learning-based protein engineering with evolutionary sampling | Code | 1 |
| Are Large Language Models Robust Coreference Resolvers? | Code | 0 |
| Do prompt positions really matter? | Code | 0 |
| Few-Shot Data Synthesis for Open Domain Multi-Hop Question Answering | — | 0 |
| Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks | Code | 0 |
| The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | Code | 2 |
| Active Learning Principles for In-Context Learning with Large Language Models | — | 0 |
| Improving Factuality and Reasoning in Language Models through Multiagent Debate | Code | 2 |
| A Rational Model of Dimension-reduced Human Categorization | — | 0 |
| Small Language Models Improve Giants by Rewriting Their Outputs | Code | 1 |
| Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation? | Code | 1 |
| Automated Few-shot Classification with Instruction-Finetuned Language Models | Code | 0 |
| PointGPT: Auto-regressively Generative Pre-training from Point Clouds | Code | 2 |
| A Weak Supervision Approach for Few-Shot Aspect Based Sentiment | — | 0 |
| Few-Shot Learning with Visual Distribution Calibration and Cross-Modal Distribution Alignment | Code | 1 |
| HMSN: Hyperbolic Self-Supervised Learning by Clustering with Ideal Prototypes | — | 0 |
| MetaGAD: Meta Representation Adaptation for Few-Shot Graph Anomaly Detection | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified |
| 2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified |
| 3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified |
| 4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified |
| 5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified |
| 6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified |
| 7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified |
| 8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified |
| 9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified |
| 10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified |
| 2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified |
| 3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified |
| 4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified |
| 2 | CAL | Harmonic mean | 35.2 | — | Unverified |
| 3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified |
| 4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BGNN | Accuracy | 92.7 | — | Unverified |
| 2 | TIM-GD | Accuracy | 87.4 | — | Unverified |
| 3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified |
| 2 | HCTransformers | 5-way 1~2-shot | 74.74 | — | Unverified |
| 3 | HyperShot | Accuracy | 53.18 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified |
| 2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified |
| 3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HCTransformers | Acc | 74.74 | — | Unverified |
| 2 | DPGN | Acc | 67.6 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified |
| 2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CovidExpert | AUC-ROC | 1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified |