SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for the various tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
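The "common representation plus task-specific classifier" idea above can be sketched minimally. The snippet below is an illustrative, hypothetical example (not from the cited paper): it uses a stand-in `embed` function in place of a learned shared representation and builds a nearest-prototype classifier, in the style of prototypical networks, from a handful of labeled support examples for a new task.

```python
import numpy as np

# Hypothetical embedding function standing in for the shared representation
# learned during meta-training; here it is the identity purely for illustration.
def embed(x: np.ndarray) -> np.ndarray:
    return x

def prototype_classify(support_x, support_y, query_x):
    """Nearest-prototype classifier on top of a shared embedding.

    support_x: (N, D) the few labeled examples for the new task
    support_y: (N,)   their class labels
    query_x:   (M, D) unlabeled examples to classify
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support examples.
    protos = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                       for c in classes])
    # Assign each query to the class with the nearest prototype.
    dists = np.linalg.norm(embed(query_x)[:, None, :] - protos[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]

# A toy 2-way 2-shot episode: two classes, two support examples each.
sx = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
sy = np.array([0, 0, 1, 1])
qx = np.array([[0.05, 0.05], [1.0, 0.95]])
print(prototype_classify(sx, sy, qx))  # → [0 1]
```

Only the lightweight prototype step is task-specific; all the capacity sits in the shared embedding, which is exactly what meta-training is meant to produce.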

Papers

Showing 1101–1150 of 2964 papers

Title | Status | Hype
Joint Graph Learning and Model Fitting in Laplacian Regularized Stratified Models | Code | 0
Cross-Domain Few-Shot Learning via Adaptive Transformer Networks | Code | 0
Kajal: Extracting Grammar of a Source Code Using Large Language Models | Code | 0
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI | Code | 0
Cross-Domain Cross-Set Few-Shot Learning via Learning Compact and Aligned Representations | Code | 0
Interactive Symbol Grounding with Complex Referential Expressions | Code | 0
Interval Bound Interpolation for Few-shot Learning with Few Tasks | Code | 0
CrisisMatch: Semi-Supervised Few-Shot Learning for Fine-Grained Disaster Tweet Classification | Code | 0
Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning | Code | 0
CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation Detection in Online Communities | Code | 0
IntellectSeeker: A Personalized Literature Management System with the Probabilistic Model and Large Language Model | Code | 0
Kernel Relative-prototype Spectral Filtering for Few-shot Learning | Code | 0
Learning to Propagate for Graph Meta-Learning | Code | 0
AttenWalker: Unsupervised Long-Document Question Answering via Attention-based Graph Walking | Code | 0
SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation | Code | 0
Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning | Code | 0
Instance-aware Prompt Learning for Language Understanding and Generation | Code | 0
Few-shot Novel Category Discovery | Code | 0
Few-Shot NLG with Pre-Trained Language Model | Code | 0
In-context Learning and Gradient Descent Revisited | Code | 0
Corrective In-Context Learning: Evaluating Self-Correction in Large Language Models | Code | 0
Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning | Code | 0
Adaptive Gradient-Based Meta-Learning Methods | Code | 0
Incremental Few-Shot Learning with Attention Attractor Networks | Code | 0
Improving generalization in large language models by learning prefix subspaces | Code | 0
When Low Resource NLP Meets Unsupervised Language Model: Meta-pretraining Then Meta-learning for Few-shot Text Classification | Code | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Few-Shot Multilingual Open-Domain QA from 5 Examples | Code | 0
Cooperative Bi-path Metric for Few-shot Learning | Code | 0
Improving Few-Shot Inductive Learning on Temporal Knowledge Graphs using Confidence-Augmented Reinforcement Learning | Code | 0
Improving Meta-Learning Generalization with Activation-Based Early-Stopping | Code | 0
Improved transferability of self-supervised learning models through batch normalization finetuning | Code | 0
Few-shot link prediction via graph neural networks for Covid-19 drug-repurposing | Code | 0
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation | Code | 0
A Large Encoder-Decoder Family of Foundation Models For Chemical Language | Code | 0
Attentional Meta-learners for Few-shot Polythetic Classification | Code | 0
Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks | Code | 0
Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings | Code | 0
Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples | Code | 0
Instance-level Few-shot Learning with Class Hierarchy Mining | Code | 0
IDD: A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments | Code | 0
Identifying Misinformation on YouTube through Transcript Contextual Analysis with Transformer Models | Code | 0
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation | Code | 0
A Language Agent for Autonomous Driving | Code | 0
HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression | Code | 0
A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning | Code | 0
Few-Shot Learning with Graph Neural Networks | Code | 0
HQP: A Human-Annotated Dataset for Detecting Online Propaganda | Code | 0
Few-Shot Learning with Graph Neural Networks | Code | 0
Continuous max-flow augmentation of self-supervised few-shot learning on SPECT left ventricles | Code | 0
Page 23 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified