SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a representation common to the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
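The "shared representation plus task-specific classifier" recipe described above can be sketched as a nearest-centroid classifier in an embedding space, in the spirit of prototypical networks. This is a minimal illustration, not any specific paper's method: the embeddings below are synthetic stand-ins for the output of a learned backbone, and all names and dimensions are made up for the example.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-centroid few-shot classifier: one prototype per class,
    computed as the mean of that class's support embeddings."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from each query embedding to each prototype.
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 3-shot episode in a hypothetical 4-d embedding space.
rng = np.random.default_rng(0)
support_x = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),
                            rng.normal(2.0, 0.1, (3, 4))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.0, 0.0, 0.0, 0.0],
                    [2.0, 2.0, 2.0, 2.0]])
print(prototype_classify(support_x, support_y, query_x))  # -> [0 1]
```

At meta-test time only the support set of the new task is needed: the backbone stays fixed and the per-task "classifier" is just the class centroids, which is why such methods adapt from a handful of examples.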

Papers

Showing 1701–1750 of 2964 papers

Title | Status | Hype
Automated Few-Shot Time Series Forecasting based on Bi-level Programming | - | 0
ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer for Event-Centric Generation and Classification | Code | 0
FewSense, Towards a Scalable and Cross-Domain Wi-Fi Sensing System Using Few-Shot Learning | - | 0
MetaDT: Meta Decision Tree with Class Hierarchy for Interpretable Few-Shot Learning | - | 0
Vision-Language Intelligence: Tasks, Representation Learning, and Large Models | - | 0
Anomaly Detection-Inspired Few-Shot Medical Image Segmentation Through Self-Supervision With Supervoxels | Code | 1
Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation | Code | 0
Interpretable Concept-based Prototypical Networks for Few-Shot Learning | - | 0
CampNet: Context-Aware Mask Prediction for End-to-End Text-Based Speech Editing | Code | 2
How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning? | - | 0
Towards better understanding and better generalization of few-shot classification in histology images with contrastive learning | Code | 1
Semantically Proportional Patchmix for Few-Shot Learning | - | 0
P4E: Few-Shot Event Detection as Prompt-Guided Identification and Localization | - | 0
Task-Adaptive Feature Transformer with Semantic Enrichment for Few-Shot Segmentation | - | 0
Cross Domain Few-Shot Learning via Meta Adversarial Training | - | 0
A Modern Self-Referential Weight Matrix That Learns to Modify Itself | Code | 1
Bias-Eliminated Semantic Refinement for Any-Shot Learning | Code | 1
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | Code | 1
Cedille: A large autoregressive French language model | Code | 2
MAML and ANIL Provably Learn Representations | - | 0
Few-shot Learning as Cluster-induced Voronoi Diagrams: A Geometric Approach | Code | 0
Exemplar-Based Contrastive Self-Supervised Learning with Few-Shot Class Incremental Learning | - | 0
Smoothed Embeddings for Certified Few-Shot Learning | Code | 0
Advances in MetaDL: AAAI 2021 challenge and workshop | - | 0
Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty | Code | 1
Similarity Learning based Few Shot Learning for ECG Time Series Classification | Code | 1
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model | Code | 3
Mobile Robot Manipulation using Pure Object Detection | Code | 1
The Effect of Diversity in Meta-Learning | Code | 0
Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences | Code | 1
Ontology-enhanced Prompt-tuning for Few-shot Learning | - | 0
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages | Code | 1
EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients | Code | 1
Instance-aware Prompt Learning for Language Understanding and Generation | Code | 0
When Facial Expression Recognition Meets Few-Shot Learning: A Joint and Alternate Learning Framework | - | 0
FrLove : Could a Frenchman rapidly identify Lovecraft? | - | 0
Understanding Few-Shot Multi-Task Representation Learning Theory | - | 0
Prototypical Representation Learning for Low-resource Knowledge Extraction: Summary and Perspective | - | 0
Prior Knowledge for Few-shot Learning—Inductive Reasoning and Distribution Calibration | - | 0
Representation Change in Model-Agnostic Meta-Learning | - | 0
Template-free Prompt Tuning for Few-shot NER | - | 0
In-BoXBART: Get Instructions into Biomedical Multi-task Learning | - | 0
RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning | Code | 0
MetaICL: Learning to Learn In Context | - | 0
Label-guided Data Augmentation for Prompt-based Few Shot Learners | - | 0
ProQA: Structural Prompt-based Pre-training for Unified Question Answering | - | 0
Exploring Example Selection for Few-shot Text-to-SQL Semantic Parsing | - | 0
Do Prompt-Based Models Really Understand the Meaning of Their Prompts? | - | 0
Few-Shot Authorship Attribution in English Reddit Posts | - | 0
STT: Soft Template Tuning for Few-Shot Learning | - | 0
Page 35 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified