SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during a meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during a meta-testing phase. An effective approach to few-shot learning is to learn a representation shared across tasks and to train task-specific classifiers on top of that representation.
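The "shared representation plus a simple per-task classifier" idea can be sketched as follows, in the style of prototypical networks. This is a minimal illustration, not any particular paper's method: it assumes embeddings have already been produced by some shared encoder, and classifies each query by its nearest class prototype (the mean support embedding per class).

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Mean embedding per class over the few labeled support examples."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query embedding to its nearest class prototype."""
    # Euclidean distance from every query to every prototype
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy 2-way 2-shot episode with hand-made 2-D "embeddings"
support_x = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0
                      [1.0, 1.0], [0.9, 1.1]])  # class 1
support_y = np.array([0, 0, 1, 1])
protos = prototypes(support_x, support_y, n_classes=2)
print(classify(np.array([[0.1, 0.0], [1.0, 0.9]]), protos))  # [0 1]
```

During meta-training, the encoder that produces these embeddings is optimized across many such episodes so that nearest-prototype classification works on new tasks with only a handful of labeled examples.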

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization

Papers

Showing 151–200 of 2964 papers

Title | Status | Hype
MedConceptsQA: Open Source Medical Concepts QA Benchmark | Code | 1
UniFS: Universal Few-shot Instance Perception with Point Representations | Code | 1
AAPL: Adding Attributes to Prompt Learning for Vision-Language Models | Code | 1
Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning | Code | 1
The Devil is in the Few Shots: Iterative Visual Knowledge Completion for Few-shot Learning | Code | 1
Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models | Code | 1
MatchSeg: Towards Better Segmentation via Reference Image Matching | Code | 1
Enhancing Vision-Language Few-Shot Adaptation with Negative Learning | Code | 1
TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks | Code | 1
ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes | Code | 1
Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning | Code | 1
Few-shot Learner Parameterization by Diffusion Time-steps | Code | 1
Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning | Code | 1
Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning | Code | 1
Generative Pretrained Hierarchical Transformer for Time Series Forecasting | Code | 1
Parameter-efficient Prompt Learning for 3D Point Cloud Understanding | Code | 1
SportQA: A Benchmark for Sports Understanding in Large Language Models | Code | 1
In-Context Learning Demonstration Selection via Influence Analysis | Code | 1
All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining | Code | 1
Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models | Code | 1
BECLR: Batch Enhanced Contrastive Few-Shot Learning | Code | 1
A Survey of Few-Shot Learning on Graphs: from Meta-Learning to Pre-Training and Prompt Learning | Code | 1
On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification | Code | 1
Reviving Undersampling for Long-Tailed Learning | Code | 1
Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior | Code | 1
DeIL: Direct-and-Inverse CLIP for Open-World Few-Shot Learning | Code | 1
A Prompt Learning Framework for Source Code Summarization | Code | 1
Self-Supervised Learning for Few-Shot Bird Sound Classification | Code | 1
MetaScript: Few-Shot Handwritten Chinese Content Generation via Generative Adversarial Networks | Code | 1
ProS: Prompting-to-simulate Generalized knowledge for Universal Cross-Domain Retrieval | Code | 1
Extending Context Window of Large Language Models via Semantic Compression | Code | 1
Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4 | Code | 1
Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration | Code | 1
Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers | Code | 1
Diversified in-domain synthesis with efficient fine-tuning for few-shot classification | Code | 1
Evaluating General Purpose Vision Foundation Models for Medical Image Analysis: An Experimental Study of DINOv2 on Radiology Benchmarks | Code | 1
D^2ST-Adapter: Disentangled-and-Deformable Spatio-Temporal Adapter for Few-shot Action Recognition | Code | 1
Simple Semantic-Aided Few-Shot Learning | Code | 1
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering | Code | 1
ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection | Code | 1
Understanding the Role of Textual Prompts in LLM for Time Series Forecasting: an Adapter View | Code | 1
Point Cloud Self-supervised Learning via 3D to Multi-view Masked Autoencoder | Code | 1
Multilingual Mathematical Autoformalization | Code | 1
Meta-Adapter: An Online Few-shot Learner for Vision-Language Model | Code | 1
Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning | Code | 1
On Bilingual Lexicon Induction with Large Language Models | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
Group Preference Optimization: Few-Shot Alignment of Large Language Models | Code | 1
Document-Level In-Context Few-Shot Relation Extraction via Pre-Trained Language Models | Code | 1
Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | Code | 1
Page 4 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | | Unverified
2 | CAL | Harmonic mean | 35.2 | | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | | Unverified
2 | TIM-GD | Accuracy | 87.4 | | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | | Unverified
3 | HyperShot | Accuracy | 53.18 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | | Unverified
2 | DPGN | Acc | 67.6 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | | Unverified