SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
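The recipe described above — a shared representation plus a lightweight task-specific classifier fit from a handful of labeled examples — can be sketched as a nearest-prototype classifier. This is a minimal illustration, not any specific paper's method: the encoder below is a fixed random linear map standing in for a meta-trained feature extractor, and all names, shapes, and data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))  # stand-in for a meta-trained encoder (assumption: fixed)

def embed(x):
    """Map raw inputs to the shared representation space."""
    return x @ W

def fit_prototypes(support_x, support_y):
    """Task-specific 'classifier': one mean embedding (prototype) per class."""
    z = embed(support_x)
    classes = np.unique(support_y)
    protos = np.stack([z[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(query_x, classes, protos):
    """Assign each query point to the class of its nearest prototype."""
    z = embed(query_x)
    dists = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# A toy 2-way 3-shot episode with well-separated clusters
support_x = np.vstack([rng.normal(0, 1, (3, 16)), rng.normal(5, 1, (3, 16))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.vstack([rng.normal(0, 1, (4, 16)), rng.normal(5, 1, (4, 16))])

classes, protos = fit_prototypes(support_x, support_y)
preds = predict(query_x, classes, protos)
print(preds)
```

Because the representation is shared, only the prototypes need to be recomputed per task, which is what makes adaptation from a few examples cheap.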

Papers

Showing 1851–1900 of 2964 papers

Title | Status | Hype
A MIMO Radar-Based Metric Learning Approach for Activity Recognition | — | 0
Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP | — | 0
Few-Shot Named Entity Recognition: An Empirical Baseline Study | — | 0
Learning Prototype Representations Across Few-Shot Tasks for Event Detection | Code | 0
TransPrompt: Towards an Automatic Transferable Prompting Framework for Few-shot Text Classification | Code | 1
Towards Realistic Few-Shot Relation Extraction | Code | 1
Continual Few-Shot Learning for Text Classification | Code | 0
Data-Efficient Language Shaped Few-shot Image Classification | Code | 0
Influential Prototypical Networks for Few Shot Learning: A Dermatological Case Study | — | 0
Few-shot learning with improved local representations via bias rectify module | — | 0
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima | Code | 1
MFNet: Multi-class Few-shot Segmentation Network with Pixel-wise Metric Learning | — | 0
MetaICL: Learning to Learn In Context | Code | 1
Domain Agnostic Few-Shot Learning For Document Intelligence | — | 0
Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose | Code | 1
On sensitivity of meta-learning to support data | Code | 1
Non-Gaussian Gaussian Processes for Few-Shot Regression | Code | 1
Self-Denoising Neural Networks for Few Shot Learning | — | 0
Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware | — | 0
Meta-Learning for Multi-Label Few-Shot Classification | — | 0
Instant Response Few-shot Object Detection with Meta Strategy and Explicit Localization Inference | Code | 0
Simultaneous Perturbation Method for Multi-Task Weight Optimization in One-Shot Meta-Learning | Code | 0
MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation | Code | 1
SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation | Code | 1
GCCN: Global Context Convolutional Network | — | 0
A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning | — | 0
Contextual Gradient Scaling for Few-Shot Learning | Code | 0
On Label-Efficient Computer Vision: Building Fast and Effective Few-Shot Image Classifiers | — | 0
Ortho-Shot: Low Displacement Rank Regularization with Data Augmentation for Few-Shot Learning | — | 0
Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning | Code | 1
PPT: Pre-trained Prompt Tuning for Few-shot Learning | — | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | — | 0
Few-Shot Learning with Siamese Networks and Label Tuning | — | 0
A MIMO Radar-based Few-Shot Learning Approach for Human-ID | — | 0
HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression | Code | 0
Hyperseed: Unsupervised Learning with Vector Symbolic Architectures | Code | 1
Few-Shot Bot: Prompt-Based Learning for Dialogue Systems | Code | 1
Can Explanations Be Useful for Calibrating Black Box Models? | Code | 1
Omni-Training: Bridging Pre-Training and Meta-Training for Few-Shot Learning | — | 0
LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5 | Code | 1
Inconsistent Few-Shot Relation Classification via Cross-Attentional Prototype Networks with Contrastive Learning | — | 0
Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers | — | 0
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI | Code | 0
A Closer Look at Prototype Classifier for Few-shot Image Classification | — | 0
Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning | Code | 1
Injecting Text and Cross-lingual Supervision in Few-shot Learning from Self-Supervised Models | — | 0
Unsupervised Representation Learning Meets Pseudo-Label Supervised Self-Distillation: A New Approach to Rare Disease Classification | Code | 0
Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning | Code | 1
Sparse MoEs meet Efficient Ensembles | Code | 1
Page 38 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5-way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified