SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of this representation.
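The "shared representation plus simple per-task classifier" idea can be sketched with a minimal prototypical-network-style episode. This is an illustrative toy, not any specific paper's method: the `embed` function stands in for a representation that would normally be learned during meta-training (here it is just a fixed linear map), and a new task's classifier is built from a handful of support examples by averaging their embeddings into one prototype per class.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a shared representation learned during meta-training;
    # here just a fixed linear map purely for illustration.
    W = np.linspace(-1.0, 1.0, x.shape[-1] * 8).reshape(x.shape[-1], 8)
    return x @ W

def prototypes(support_x, support_y, n_classes):
    # Task-specific "classifier": one prototype per class, the mean
    # embedding of that class's few support examples.
    z = embed(support_x)
    return np.stack([z[support_y == c].mean(axis=0) for c in range(n_classes)])

def classify(query_x, protos):
    # Assign each query to the nearest prototype (Euclidean distance).
    zq = embed(query_x)
    d = ((zq[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# A toy 2-way 3-shot episode: two well-separated synthetic classes.
support_x = np.concatenate([rng.normal(0, 0.1, (3, 4)),
                            rng.normal(5, 0.1, (3, 4))])
support_y = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support_x, support_y, n_classes=2)
preds = classify(np.array([[0.0] * 4, [5.0] * 4]), protos)
print(preds)
```

Only the prototypes change from task to task; the embedding is reused, which is what lets the classifier be fit from so few examples.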

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization

Papers

Showing 1051–1100 of 2964 papers

| Title | Status | Hype |
| --- | --- | --- |
| Decoder Choice Network for Meta-Learning | Code | 0 |
| Dealing With Heterogeneous 3D MR Knee Images: A Federated Few-Shot Learning Method With Dual Knowledge Distillation | Code | 0 |
| Layer-Wise Feature Metric of Semantic-Pixel Matching for Few-Shot Learning | Code | 0 |
| Learning New Tasks from a Few Examples with Soft-Label Prototypes | Code | 0 |
| Dataset2Vec: Learning Dataset Meta-Features | Code | 0 |
| Data-Efficient Language Shaped Few-shot Image Classification | Code | 0 |
| Adaptive Prototypical Networks | Code | 0 |
| Data-Efficient Classification of Radio Galaxies | Code | 0 |
| Knowledge Graph Transfer Network for Few-Shot Recognition | Code | 0 |
| The Role of Data Curation in Image Captioning | Code | 0 |
| Data Augmentation Generative Adversarial Networks | Code | 0 |
| ALPaCA vs. GP-based Prior Learning: A Comparison between two Bayesian Meta-Learning Algorithms | Code | 0 |
| Kernel Relative-prototype Spectral Filtering for Few-shot Learning | Code | 0 |
| DAMSL: Domain Agnostic Meta Score-based Learning | Code | 0 |
| AutoProtoNet: Interpretability for Prototypical Networks | Code | 0 |
| Joint Graph Learning and Model Fitting in Laplacian Regularized Stratified Models | Code | 0 |
| Kajal: Extracting Grammar of a Source Code Using Large Language Models | Code | 0 |
| Knowledge-Enhanced Multi-Label Few-Shot Product Attribute-Value Extraction | Code | 0 |
| Automatic Generation of Fashion Images using Prompting in Generative Machine Learning Models | Code | 0 |
| Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0 |
| IntellectSeeker: A Personalized Literature Management System with the Probabilistic Model and Large Language Model | Code | 0 |
| Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning | Code | 0 |
| Adaptive Posterior Learning: few-shot learning with a surprise-based memory module | Code | 0 |
| Interactive Symbol Grounding with Complex Referential Expressions | Code | 0 |
| Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI | Code | 0 |
| Instance-aware Prompt Learning for Language Understanding and Generation | Code | 0 |
| SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation | Code | 0 |
| Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning | Code | 0 |
| Instance-level Few-shot Learning with Class Hierarchy Mining | Code | 0 |
| Automated Few-shot Classification with Instruction-Finetuned Language Models | Code | 0 |
| Incremental Few-Shot Learning with Attention Attractor Networks | Code | 0 |
| Instance Selection Mechanisms for Human-in-the-Loop Systems in Few-Shot Learning | Code | 0 |
| CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud | Code | 0 |
| Karyotype AI for Precision Oncology | Code | 0 |
| Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples | Code | 0 |
| A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models | Code | 0 |
| Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning | Code | 0 |
| Cross-lingual Approaches for the Detection of Adverse Drug Reactions in German from a Patient's Perspective | Code | 0 |
| Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0 |
| Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing | Code | 0 |
| Support-Set Context Matters for Bongard Problems | Code | 0 |
| Alleviating Exposure Bias via Multi-level Contrastive Learning and Deviation Simulation in Abstractive Summarization | Code | 0 |
| Improving Meta-Learning Generalization with Activation-Based Early-Stopping | Code | 0 |
| Improving Few-Shot Inductive Learning on Temporal Knowledge Graphs using Confidence-Augmented Reinforcement Learning | Code | 0 |
| When Low Resource NLP Meets Unsupervised Language Model: Meta-pretraining Then Meta-learning for Few-shot Text Classification | Code | 0 |
| Cross-domain Multi-modal Few-shot Object Detection via Rich Text | Code | 0 |
| Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings | Code | 0 |
| Feature Extractor Stacking for Cross-domain Few-shot Learning | Code | 0 |
| Adaptive Masking Enhances Visual Grounding | Code | 0 |
Page 22 of 60

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | gpt-4-0125-preview | Accuracy | 61.91 | | Unverified |
| 2 | gpt-4-0125-preview | Accuracy | 52.49 | | Unverified |
| 3 | gpt-3.5-turbo | Accuracy | 41.48 | | Unverified |
| 4 | gpt-3.5-turbo | Accuracy | 37.06 | | Unverified |
| 5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | | Unverified |
| 6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | | Unverified |
| 7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | | Unverified |
| 8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | | Unverified |
| 9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | | Unverified |
| 10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 67.27 | | Unverified |
| 2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | | Unverified |
| 3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | | Unverified |
| 4 | CAL | 4-shot Accuracy | 40.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SaSPA + CAL | Harmonic mean | 52.2 | | Unverified |
| 2 | CAL | Harmonic mean | 35.2 | | Unverified |
| 3 | Variational Prompt Tuning | Harmonic mean | 34.69 | | Unverified |
| 4 | Real-Guidance + CAL | Harmonic mean | 34.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BGNN | Accuracy | 92.7 | | Unverified |
| 2 | TIM-GD | Accuracy | 87.4 | | Unverified |
| 3 | UNEM-Gaussian | Accuracy | 66.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | EASY (transductive) | Accuracy | 82.75 | | Unverified |
| 2 | HCTransformers | 5 way 1~2 shot | 74.74 | | Unverified |
| 3 | HyperShot | Accuracy | 53.18 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | | Unverified |
| 2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | | Unverified |
| 3 | CAL | 4-shot Accuracy | 42.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HCTransformers | Acc | 74.74 | | Unverified |
| 2 | DPGN | Acc | 67.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | | Unverified |
| 2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 96.44 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 77.71 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 81.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 91.57 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CovidExpert | AUC-ROC | 1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UNEM-Gaussian | Accuracy | 65.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UNEM-Gaussian | Accuracy | 73.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 96.82 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 73.07 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 78.51 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UNEM-Gaussian | Accuracy | 52.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Variational Prompt Tuning | Harmonic mean | 79 | | Unverified |