SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during a meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during a meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
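The "common representation plus task-specific classifier" recipe described above can be sketched in a few lines. This is a minimal, hypothetical illustration using numpy: it assumes the embeddings have already been produced by a shared encoder learned during meta-training (not shown), and the task-specific classifier is simply nearest-class-prototype, in the spirit of prototypical networks.

```python
import numpy as np

def prototypes(support_x, support_y):
    """Compute one class prototype (mean embedding) per class
    from the few labeled support examples of a new task."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    """Task-specific classifier: assign each query embedding
    to the class of its nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 2-shot episode; the 2-D "embeddings" below are made up
# and stand in for the output of a meta-trained shared encoder.
support_x = np.array([[0.0, 0.0], [0.1, 0.0],   # class 0 support
                      [1.0, 1.0], [0.9, 1.1]])  # class 1 support
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.05, 0.05], [1.0, 0.9]])

classes, protos = prototypes(support_x, support_y)
print(classify(query_x, classes, protos))  # → [0 1]
```

The point of the sketch is that only the cheap prototype step is redone per task at meta-test time; the expensive part (the shared encoder) is learned once across tasks.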

Papers

Showing 2901–2950 of 2964 papers

Title | Status | Hype
Simulated Annealing in Early Layers Leads to Better Generalization | Code | 0
Federated Few-shot Learning for Cough Classification with Edge Devices | Code | 0
FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification | Code | 0
Boosting Point-BERT by Multi-choice Tokens | Code | 0
Few-Shot Learning with Localization in Realistic Settings | Code | 0
E-Sort: Empowering End-to-end Neural Network for Multi-channel Spike Sorting with Transfer Learning and Fast Post-processing | Code | 0
Extensively Matching for Few-shot Learning Event Detection | Code | 0
Exploring the Similarity of Representations in Model-Agnostic Meta-Learning | Code | 0
ALPaCA vs. GP-based Prior Learning: A Comparison between two Bayesian Meta-Learning Algorithms | Code | 0
Coloring With Limited Data: Few-Shot Colorization via Memory Augmented Networks | Code | 0
Universal Music Representations? Evaluating Foundation Models on World Music Corpora | Code | 0
Alleviating Exposure Bias via Multi-level Contrastive Learning and Deviation Simulation in Abstractive Summarization | Code | 0
Exploring the Readiness of Prominent Small Language Models for the Democratization of Financial Literacy | Code | 0
Exploring the Limits of Natural Language Inference Based Setup for Few-Shot Intent Detection | Code | 0
Can In-context Learners Learn a Reasoning Concept from Demonstrations? | Code | 0
Pragmatic Competence Evaluation of Large Language Models for the Korean Language | Code | 0
CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings | Code | 0
Predicting the Accuracy of a Few-Shot Classifier | Code | 0
A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning | Code | 0
Small Sample Hyperspectral Image Classification Based on the Random Patches Network and Recursive Filtering | Code | 0
Pre-Finetuning for Few-Shot Emotional Speech Recognition | Code | 0
Preserving Fine-Grain Feature Information in Classification via Entropic Regularization | Code | 0
Towards Cross-Lingual Audio Abuse Detection in Low-Resource Settings with Few-Shot Learning | Code | 0
Pre-trained Recommender Systems: A Causal Debiasing Perspective | Code | 0
Pre-trained Token-replaced Detection Model as Few-shot Learner | Code | 0
SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies | Code | 0
Exploring Self-Supervised Vision Transformers for Deepfake Detection: A Comparative Analysis | Code | 0
Exploring Cross-Domain Few-Shot Classification via Frequency-Aware Prompting | Code | 0
Exploiting Causality Signals in Medical Images: A Pilot Study with Empirical Results | Code | 0
Smoothed Embeddings for Certified Few-Shot Learning | Code | 0
Collect and Select: Semantic Alignment Metric Learning for Few-Shot Learning | Code | 0
EVA-X: A Foundation Model for General Chest X-ray Analysis with Self-supervised Learning | Code | 0
Privacy Enhancement for Cloud-Based Few-Shot Learning | Code | 0
A Task-aware Dual Similarity Network for Fine-grained Few-shot Learning | Code | 0
A Few-Shot Attention Recurrent Residual U-Net for Crack Segmentation | Code | 0
COCA: Classifier-Oriented Calibration via Textual Prototype for Source-Free Universal Domain Adaptation | Code | 0
Procedural Text Mining with Large Language Models | Code | 0
Solving the Baby Intuitions Benchmark with a Hierarchically Bayesian Theory of Mind | Code | 0
Coarse-To-Fine Incremental Few-Shot Learning | Code | 0
Program synthesis performance constrained by non-linear spatial relations in Synthetic Visual Reasoning Test | Code | 0
Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification | Code | 0
A Feature Generator for Few-Shot Learning | Code | 0
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation | Code | 0
Evaluating and Improving Graph to Text Generation with Large Language Models | Code | 0
Coarsely-Labeled Data for Better Few-Shot Transfer | Code | 0
C-Norm: a neural approach to few-shot entity normalization | Code | 0
Clustered-patch Element Connection for Few-shot Learning | Code | 0
SpatialFormer: Semantic and Target Aware Attentions for Few-Shot Learning | Code | 0
PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners | Code | 0
A Large Encoder-Decoder Family of Foundation Models For Chemical Language | Code | 0
Page 59 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified