SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from only a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a representation shared across tasks and to train task-specific classifiers on top of it.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
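The "shared representation + task-specific classifier" approach described above can be sketched as a nearest-prototype classifier: a fixed embedding plays the role of the representation learned during meta-training, and each new few-shot task gets its own lightweight classifier built from class-mean embeddings of the support set. This is a minimal illustrative sketch, not any specific paper's method; the `embed` function is a hypothetical stand-in for a trained network.

```python
import numpy as np

def embed(x):
    """Stand-in for a shared representation learned during meta-training.
    Here it is just a fixed linear projection (hypothetical)."""
    W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    return x @ W.T

def fit_prototypes(support_x, support_y):
    """Task-specific 'classifier': one prototype (mean embedding) per class,
    computed from the few labeled support examples of the new task."""
    z = embed(support_x)
    classes = np.unique(support_y)
    protos = np.stack([z[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(query_x, classes, protos):
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance in embedding space)."""
    z = embed(query_x)
    dists = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# A toy 2-way 2-shot task: two support examples per class.
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
classes, protos = fit_prototypes(support_x, support_y)
print(predict(np.array([[0.05, 0.05], [1.0, 0.9]]), classes, protos))  # → [0 1]
```

Only the prototypes are task-specific; the embedding stays fixed across tasks, which is what lets the classifier be fit from a handful of examples.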

Papers

Showing 251–300 of 2964 papers

Title | Status | Hype
DETReg: Unsupervised Pretraining with Region Priors for Object Detection | Code | 1
Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Generation for Few-shot Learning | Code | 1
CMT in TREC-COVID Round 2: Mitigating the Generalization Gaps from Web to Special Domain Search | Code | 1
Diagnosing Infeasible Optimization Problems Using Large Language Models | Code | 1
A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization | Code | 1
FETA: Towards Specializing Foundation Models for Expert Task Applications | Code | 1
FewSAR: A Few-shot SAR Image Classification Benchmark | Code | 1
CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors | Code | 1
A Comprehensive Evaluation of Multi-task Learning and Multi-task Pre-training on EHR Time-series Data | Code | 1
Adaptive Subspaces for Few-Shot Learning | Code | 1
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding | Code | 1
Replication: Contrastive Learning and Data Augmentation in Traffic Classification Using a Flowpic Input Representation | Code | 1
Few-shot Adaptation Works with UnpredicTable Data | Code | 1
Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa | Code | 1
Few-shot Natural Language Generation for Task-Oriented Dialog | Code | 1
Fast Learning of Dynamic Hand Gesture Recognition with Few-Shot Learning Models | Code | 1
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training | Code | 1
FDFtNet: Facing Off Fake Images using Fake Detection Fine-tuning Network | Code | 1
Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences | Code | 1
ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes | Code | 1
CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP | Code | 1
Feature Generation for Long-tail Classification | Code | 1
Class-Aware Patch Embedding Adaptation for Few-Shot Image Classification | Code | 1
FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs | Code | 1
FAPIS: A Few-shot Anchor-free Part-based Instance Segmenter | Code | 1
All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining | Code | 1
Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid | Code | 1
Class-Incremental Domain Adaptation with Smoothing and Calibration for Surgical Report Generation | Code | 1
Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes | Code | 1
Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning | Code | 1
Explanation-Guided Training for Cross-Domain Few-Shot Classification | Code | 1
Exploring Efficient Few-shot Adaptation for Vision Transformers | Code | 1
Channel Importance Matters in Few-Shot Image Classification | Code | 1
EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization | Code | 1
Example-Based Named Entity Recognition | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
Can Explanations Be Useful for Calibrating Black Box Models? | Code | 1
Evaluating Weakly Supervised Object Localization Methods Right | Code | 1
ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning | Code | 1
Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets | Code | 1
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering | Code | 1
3D-IDS: Doubly Disentangled Dynamic Intrusion Detection | Code | 1
Calibrate Before Use: Improving Few-Shot Performance of Language Models | Code | 1
CDFSL-V: Cross-Domain Few-Shot Learning for Videos | Code | 1
Charting the Right Manifold: Manifold Mixup for Few-shot Learning | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
Overcoming challenges in leveraging GANs for few-shot data augmentation | Code | 1
Chameleon: A MatMul-Free Temporal Convolutional Network Accelerator for End-to-End Few-Shot and Continual Learning from Sequential Data | Code | 1
Expanding Event Modality Applications through a Robust CLIP-Based Encoder | Code | 1
Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning | Code | 1
Page 6 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified