SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during a meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during a meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation shared across tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
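The shared-representation idea above can be sketched in a prototype-based form (in the spirit of Prototypical Networks — one common instantiation, not the method of the cited paper): each class is summarized by the mean of its few support embeddings, and queries are assigned to the nearest class prototype. The pre-trained encoder that would normally produce the shared representation is an assumption here; raw feature vectors stand in for its output.

```python
import numpy as np

def class_prototypes(support_x, support_y):
    """Compute one prototype (mean embedding) per class from the few
    labeled support examples of a task."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype(query_x, classes, protos):
    """Task-specific classifier: assign each query to the class whose
    prototype is closest in squared Euclidean distance."""
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode (features are stand-ins for encoder outputs).
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 4.9]])
support_y = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(support_x, support_y)
preds = nearest_prototype(np.array([[0.05, 0.05], [4.9, 5.1]]), classes, protos)
```

Because only the class means are task-specific, adapting to a new task needs no gradient steps — just a handful of support embeddings.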

Papers

Showing 851–900 of 2964 papers

Title | Status | Hype
LLM4Drive: A Survey of Large Language Models for Autonomous Driving | Code | 3
Few-shot time-series anomaly detection with unsupervised domain adaptation | — | 0
On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval | — | 0
STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction | — | 0
Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning | Code | 1
Adaptive Anchor Label Propagation for Transductive Few-Shot Learning | Code | 0
Pre-trained Recommender Systems: A Causal Debiasing Perspective | Code | 0
Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning | — | 0
A Few-Shot Learning Focused Survey on Recent Named Entity Recognition and Relation Classification Methods | — | 0
Weakly-Supervised Surgical Phase Recognition | — | 0
PAC-tuning: Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent | — | 0
Zephyr: Direct Distillation of LM Alignment | Code | 5
Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning | — | 0
MyriadAL: Active Few Shot Learning for Histopathology | Code | 0
Improving generalization in large language models by learning prefix subspaces | Code | 0
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages | Code | 0
CrisisMatch: Semi-Supervised Few-Shot Learning for Fine-Grained Disaster Tweet Classification | Code | 0
Large Language Models are biased to overestimate profoundness | Code | 0
Are LSTMs Good Few-Shot Learners? | Code | 0
On Bilingual Lexicon Induction with Large Language Models | Code | 1
Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning | Code | 0
Exploring In-Context Learning of Textless Speech Language Model for Speech Classification Tasks | — | 0
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
Experimental Results of Underwater Sound Speed Profile Inversion by Few-shot Multi-task Learning | — | 0
Few-Shot In-Context Imitation Learning via Implicit Graph Alignment | — | 0
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation | Code | 0
Group Preference Optimization: Few-Shot Alignment of Large Language Models | Code | 1
Document-Level In-Context Few-Shot Relation Extraction via Pre-Trained Language Models | Code | 1
Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | Code | 1
LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation | Code | 1
Few-Shot Learning Patterns in Financial Time-Series for Trend-Following Strategies | Code | 2
Leveraging Large Language Models for Node Generation in Few-Shot Learning on Text-Attributed Graphs | Code | 1
In-Context Learning with Iterative Demonstration Selection | Code | 1
Configuration Validation with Large Language Models | — | 0
Plug-and-Play Feature Generation for Few-Shot Medical Image Classification | — | 0
In-Context Learning for Few-Shot Molecular Property Prediction | — | 0
Subspace Adaptation Prior for Few-Shot Learning | Code | 0
LLM-augmented Preference Learning from Natural Language | — | 0
Language-Guided Reinforcement Learning for Hard Attention in Few-Shot Learning | — | 0
Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning | — | 0
Leveraging Twitter Data for Sentiment Analysis of Transit User Feedback: An NLP Framework | — | 0
NuTime: Numerically Multi-Scaled Embedding for Large-Scale Time-Series Pretraining | Code | 1
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression | Code | 5
Model Tuning or Prompt Tuning? A Study of Large Language Models for Clinical Concept and Relation Extraction | — | 0
PatchProto Networks for Few-shot Visual Anomaly Classification | — | 0
Task Aware Modulation using Representation Learning: An Approach for Few Shot Learning in Environmental Systems | — | 0
A Holistic Evaluation of Piano Sound Quality | — | 0
UniPredict: Large Language Models are Universal Tabular Classifiers | — | 0
PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification | — | 0
Procedural Text Mining with Large Language Models | Code | 0
Page 18 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified