SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation common to the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
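The shared-representation recipe described above can be sketched in a few lines, here in the style of prototypical networks: a common embedding is reused across tasks, and the task-specific "classifier" is built from the few support examples of each new task. This is a minimal illustrative sketch, not the method of the cited paper; the linear ReLU embedding `W` stands in for a meta-trained backbone.

```python
import numpy as np

def embed(x, W):
    """Shared representation: a linear map plus ReLU.
    In practice this is a deep network meta-trained across many tasks."""
    return np.maximum(x @ W, 0.0)

def prototypes(support_x, support_y, W):
    """Task-specific classifier on top of the shared representation:
    one prototype (mean embedding) per class in the support set."""
    z = embed(support_x, W)
    classes = np.unique(support_y)
    return classes, np.stack([z[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos, W):
    """Label each query with the class of its nearest prototype."""
    z = embed(query_x, W)
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 2-shot episode; W is a hypothetical fixed embedding matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
support_x = np.array([[1.0, 0, 0, 0], [1.1, 0, 0, 0],   # class 0
                      [0, 0, 1.0, 0], [0, 0, 1.2, 0]])  # class 1
support_y = np.array([0, 0, 1, 1])
classes, protos = prototypes(support_x, support_y, W)
preds = classify(np.array([[0.9, 0, 0, 0], [0, 0, 1.1, 0]]), classes, protos, W)
```

Note that only the support set of the new task is used to build the classifier; the embedding itself is frozen at meta-test time, which is what lets the learner adapt from a handful of examples.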

Papers

Showing 701–750 of 2964 papers

Title | Status | Hype
How Important is Domain Specificity in Language Models and Instruction Finetuning for Biomedical Relation Extraction? | — | 0
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models | — | 0
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data | — | 0
Few-shot clinical entity recognition in English, French and Spanish: masked language models outperform generative model prompting | — | 0
Me LLaMA: Foundation Large Language Models for Medical Applications | Code | 2
DeepCode AI Fix: Fixing Security Vulnerabilities with Large Language Models | — | 0
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation | Code | 2
Modularized Networks for Few-shot Hateful Meme Detection | Code | 0
In-Context Learning Demonstration Selection via Influence Analysis | Code | 1
Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction | Code | 0
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains | Code | 2
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models | — | 0
All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining | Code | 1
Self-Augmented In-Context Learning for Unsupervised Word Translation | Code | 0
How Secure Are Large Language Models (LLMs) for Navigation in Urban Environments? | — | 0
Few-Shot Learning with Uncertainty-based Quadruplet Selection for Interference Classification in GNSS Data | — | 0
Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning | — | 0
Advancing Video Anomaly Detection: A Concise Review and a New Dataset | — | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | — | 0
Rethinking Skill Extraction in the Job Market Domain using Large Language Models | Code | 0
Exploring Low-Resource Medical Image Classification with Weakly Supervised Prompt Learning | — | 0
Large Language Models to Enhance Bayesian Optimization | Code | 2
Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models | Code | 1
Automatic Combination of Sample Selection Strategies for Few-Shot Learning | — | 0
A Complete Survey on Contemporary Methods, Emerging Paradigms and Hybrid Approaches for Few-Shot Learning | — | 0
BECLR: Batch Enhanced Contrastive Few-Shot Learning | Code | 1
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | — | 0
SynthDST: Synthetic Data is All You Need for Few-Shot Dialog State Tracking | Code | 0
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | Code | 5
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation | Code | 0
On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification | Code | 1
A Survey of Few-Shot Learning on Graphs: from Meta-Learning to Pre-Training and Prompt Learning | Code | 1
SymbolicAI: A framework for logic-based approaches combining generative models and solvers | Code | 5
EEG-GPT: Exploring Capabilities of Large Language Models for EEG Classification and Interpretation | — | 0
Episodic-free Task Selection for Few-shot Learning | — | 0
Reviving Undersampling for Long-Tailed Learning | Code | 1
Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes | Code | 0
Cross-Domain Few-Shot Learning via Adaptive Transformer Networks | Code | 0
Fine-grained Contract NER using instruction based model | Code | 0
LDCA: Local Descriptors with Contextual Augmentation for Few-Shot Learning | — | 0
It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | — | 0
From Random to Informed Data Selection: A Diversity-Based Approach to Optimize Human Annotation and Few-Shot Learning | — | 0
Growing from Exploration: A self-exploring framework for robots based on foundation models | — | 0
Generating Zero-shot Abstractive Explanations for Rumour Verification | Code | 0
Training microrobots to swim by a large language model | — | 0
Identifying and Analyzing Task-Encoding Tokens in Large Language Models | — | 0
Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences | Code | 0
Leveraging Biases in Large Language Models: "bias-kNN" for Effective Few-Shot Learning | — | 0
Few-shot learning for COVID-19 Chest X-Ray Classification with Imbalanced Data: An Inter vs. Intra Domain Study | Code | 0
Few-Shot Learning for Chronic Disease Management: Leveraging Large Language Models and Multi-Prompt Engineering with Medical Knowledge Injection | — | 0
Page 15 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5-way 1~2-shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Accuracy | 74.74 | — | Unverified
2 | DPGN | Accuracy | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified