SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during a meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during a meta-testing phase. An effective approach to few-shot learning is to learn a representation shared across tasks and to train task-specific classifiers on top of that representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
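The shared-representation idea above can be sketched with a nearest-prototype classifier (in the style of prototypical networks): embeddings come from a common, pre-trained encoder, and the "task-specific classifier" is just the class means of the few support examples. This is a minimal illustrative sketch, not the method of the cited paper; the function name and the toy 2-D "embeddings" are hypothetical.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-prototype few-shot classification.

    support_x: (n_support, d) embeddings from a shared encoder
    support_y: (n_support,) integer class labels of the support set
    query_x:   (n_query, d) embeddings of the query examples
    Returns predicted class labels for the query examples.
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Assign each query to the class of its nearest prototype (Euclidean distance).
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode with hypothetical 2-D embeddings.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [1.0, 0.9]])
print(prototype_classify(support_x, support_y, query_x))  # → [0 1]
```

Because only the prototypes depend on the new task, adapting to it requires no gradient steps at meta-test time; all the learning effort goes into the shared encoder.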

Papers

Showing 1301–1325 of 2964 papers

Title | Status | Hype
Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples | Code | 0
TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning | | 0
CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for Optimized Learning Fusion | | 0
Small Language Models as Effective Guides for Large Language Models in Chinese Relation Extraction | | 0
How Important is Domain Specificity in Language Models and Instruction Finetuning for Biomedical Relation Extraction? | | 0
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models | | 0
Few-shot clinical entity recognition in English, French and Spanish: masked language models outperform generative model prompting | | 0
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data | | 0
Modularized Networks for Few-shot Hateful Meme Detection | Code | 0
DeepCode AI Fix: Fixing Security Vulnerabilities with Large Language Models | | 0
Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction | Code | 0
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models | | 0
Self-Augmented In-Context Learning for Unsupervised Word Translation | Code | 0
How Secure Are Large Language Models (LLMs) for Navigation in Urban Environments? | | 0
Few-Shot Learning with Uncertainty-based Quadruplet Selection for Interference Classification in GNSS Data | | 0
Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning | | 0
Advancing Video Anomaly Detection: A Concise Review and a New Dataset | | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | | 0
Exploring Low-Resource Medical Image Classification with Weakly Supervised Prompt Learning | | 0
Rethinking Skill Extraction in the Job Market Domain using Large Language Models | Code | 0
A Complete Survey on Contemporary Methods, Emerging Paradigms and Hybrid Approaches for Few-Shot Learning | | 0
Automatic Combination of Sample Selection Strategies for Few-Shot Learning | | 0
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | | 0
SynthDST: Synthetic Data is All You Need for Few-Shot Dialog State Tracking | Code | 0
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation | Code | 0
Page 53 of 119

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | | Unverified
2 | CAL | Harmonic mean | 35.2 | | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | | Unverified
2 | TIM-GD | Accuracy | 87.4 | | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | | Unverified
2 | HCTransformers | 5-way 1~2-shot | 74.74 | | Unverified
3 | HyperShot | Accuracy | 53.18 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | | Unverified
2 | DPGN | Acc | 67.6 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | | Unverified