SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks given only a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation common to the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
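The "shared representation plus task-specific classifier" recipe above can be sketched with a minimal nearest-prototype classifier (in the style of prototypical networks). This is an illustrative assumption, not the method of the cited paper: embeddings would normally come from a meta-trained encoder, so random vectors clustered around per-class centers stand in for encoder outputs here.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_way):
    """Task-specific classifier: one prototype per class, the mean of that
    class's few support ("shot") embeddings."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_way)])

def classify(query_emb, protos):
    """Assign each query embedding to the nearest prototype (Euclidean)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 3, 16          # a 5-way 3-shot task

# Stand-in for a meta-trained encoder: each class's embeddings cluster
# tightly around a class center in the shared representation space.
centers = rng.normal(size=(n_way, dim))
support_labels = np.repeat(np.arange(n_way), k_shot)
support_emb = centers[support_labels] + 0.1 * rng.normal(size=(n_way * k_shot, dim))
query_labels = np.repeat(np.arange(n_way), 2)
query_emb = centers[query_labels] + 0.1 * rng.normal(size=(n_way * 2, dim))

protos = prototypes(support_emb, support_labels, n_way)
pred = classify(query_emb, protos)
print("query accuracy:", (pred == query_labels).mean())
```

Only the prototypes are task-specific; the encoder (simulated here) is fixed at meta-test time, which is what lets the classifier work from a handful of labeled examples.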

Papers

Showing 351–400 of 2,964 papers

Title | Status | Hype
Model-Agnostic Few-Shot Open-Set Recognition | Code | 1
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification | Code | 1
Channel Importance Matters in Few-Shot Image Classification | Code | 1
NatGen: Generative pre-training by "Naturalizing" source code | Code | 1
Rethinking Generalization in Few-Shot Classification | Code | 1
Metric Based Few-Shot Graph Classification | Code | 1
POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples | Code | 1
Few-Shot Learning by Dimensionality Reduction in Gradient Space | Code | 1
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks | Code | 1
Few-Shot Diffusion Models | Code | 1
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models | Code | 1
Prompt-aligned Gradient for Prompt Tuning | Code | 1
Easter2.0: Improving convolutional models for handwritten text recognition | Code | 1
Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | Code | 1
Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer | Code | 1
Learning Dialogue Representations from Consecutive Utterances | Code | 1
GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation | Code | 1
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts | Code | 1
Region-Aware Metric Learning for Open World Semantic Segmentation via Meta-Channel Aggregation | Code | 1
ProQA: Structural Prompt-based Pre-training for Unified Question Answering | Code | 1
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning Under the Support-Query Shift | Code | 1
FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs | Code | 1
Generating Representative Samples for Few-Shot Classification | Code | 1
Few-Shot Document-Level Relation Extraction | Code | 1
The ICML 2022 Expressive Vocalizations Workshop and Competition: Recognizing, Generating, and Personalizing Vocal Bursts | Code | 1
POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection | Code | 1
Prompt-free and Efficient Few-shot Learning with Language Models | Code | 1
Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models | Code | 1
Look Closer to Supervise Better: One-Shot Font Generation via Component-Based Discriminator | Code | 1
Realistic Evaluation of Transductive Few-Shot Learning | Code | 1
Data Distributional Properties Drive Emergent In-Context Learning in Transformers | Code | 1
Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference | Code | 1
Few-shot Learning with Noisy Labels | Code | 1
Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification | Code | 1
BankNote-Net: Open dataset for assistive universal currency recognition | Code | 1
Universal Representations: A Unified Look at Multiple Task and Domain Learning | Code | 1
MetaAudio: A Few-Shot Audio Classification Benchmark | Code | 1
Too Big to Fail? Active Few-Shot Learning Guided Logic Synthesis | Code | 1
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models | Code | 1
Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging | Code | 1
kNN-NER: Named Entity Recognition with Nearest Neighbor Search | Code | 1
Overcoming challenges in leveraging GANs for few-shot data augmentation | Code | 1
Integrative Few-Shot Learning for Classification and Segmentation | Code | 1
WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models | Code | 1
Few-Shot Learning with Siamese Networks and Label Tuning | Code | 1
A Rationale-Centric Framework for Human-in-the-loop Machine Learning | Code | 1
HyperShot: Few-Shot Learning by Kernel HyperNetworks | Code | 1
Global Convergence of MAML and Theory-Inspired Neural Architecture Search for Few-Shot Learning | Code | 1
Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning | Code | 1
Label Semantics for Few Shot Named Entity Recognition | Code | 1
Page 8 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified