SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation shared across tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
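The shared-representation recipe described above can be sketched on a toy 2-way 5-shot task. In this minimal, illustrative example, a fixed random linear map plus tanh stands in for the meta-trained feature extractor, and the task-specific classifier is a nearest-prototype rule (one mean embedding per class) fit on the support set; all names, weights, and data below are hypothetical.

```python
import math
import random

def embed(x, weights):
    # Shared representation: a fixed linear map + tanh nonlinearity,
    # standing in for a meta-trained backbone (hypothetical weights).
    return [math.tanh(sum(xi * w for xi, w in zip(x, col))) for col in weights]

def mean(vectors):
    # Component-wise mean of a list of equal-length vectors.
    return [sum(v) / len(vectors) for v in zip(*vectors)]

def dist(a, b):
    # Euclidean distance in embedding space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fit_prototypes(support, weights):
    # Task-specific classifier: one prototype (mean embedding) per class,
    # fit from the few labeled support examples.
    by_class = {}
    for x, y in support:
        by_class.setdefault(y, []).append(embed(x, weights))
    return {c: mean(vs) for c, vs in by_class.items()}

def predict(x, protos, weights):
    # Classify a query by its nearest class prototype.
    z = embed(x, weights)
    return min(protos, key=lambda c: dist(z, protos[c]))

rng = random.Random(0)
# Hypothetical "meta-learned" weights: 8 output dims from 4 input dims.
weights = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(8)]
# Synthetic 2-way 5-shot support set: class 0 near 0, class 1 near 3.
support = [([rng.gauss(c * 3, 0.1) for _ in range(4)], c)
           for c in (0, 1) for _ in range(5)]
protos = fit_prototypes(support, weights)
queries = [([rng.gauss(c * 3, 0.1) for _ in range(4)], c)
           for c in (0, 1) for _ in range(3)]
print([predict(x, protos, weights) for x, _ in queries])  # -> [0, 0, 0, 1, 1, 1]
```

Only the prototypes are task-specific here; the embedding is frozen, which is the point of the shared-representation approach: adapting to a new task costs one mean per class rather than any gradient updates.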

Papers

Showing 351–400 of 2964 papers

Title | Status | Hype
Bias-Eliminated Semantic Refinement for Any-Shot Learning | Code | 1
Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot Image Classification | Code | 1
Class-Aware Patch Embedding Adaptation for Few-Shot Image Classification | Code | 1
Finetune like you pretrain: Improved finetuning of zero-shot vision models | Code | 1
Diversified in-domain synthesis with efficient fine-tuning for few-shot classification | Code | 1
Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning | Code | 1
FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning | Code | 1
Binocular Mutual Learning for Improving Few-shot Classification | Code | 1
Class-Incremental Domain Adaptation with Smoothing and Calibration for Surgical Report Generation | Code | 1
BioBERT: a pre-trained biomedical language representation model for biomedical text mining | Code | 1
EPCL: Frozen CLIP Transformer is An Efficient Point Cloud Encoder | Code | 1
Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise | Code | 1
Generalising via Meta-Examples for Continual Learning in the Wild | Code | 1
Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach | Code | 1
Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting | Code | 1
Diffusion Mechanism in Residual Neural Network: Theory and Applications | Code | 1
Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | Code | 1
Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | Code | 1
Discrete and Soft Prompting for Multilingual Models | Code | 1
Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment | Code | 1
"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task Transfer | Code | 1
GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks | Code | 1
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts | Code | 1
GPSFormer: A Global Perception and Local Structure Fitting-based Transformer for Point Cloud Understanding | Code | 1
Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Generation for Few-shot Learning | Code | 1
DiffCLIP: Few-shot Language-driven Multimodal Classifier | Code | 1
Graph Prototypical Networks for Few-shot Learning on Attributed Networks | Code | 1
GraphQ IR: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation | Code | 1
Boosting on the shoulders of giants in quantum device calibration | Code | 1
Group Preference Optimization: Few-Shot Alignment of Large Language Models | Code | 1
A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level | Code | 1
Hierarchical Attention Network for Few-Shot Object Detection via Meta-Contrastive Learning | Code | 1
Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference | Code | 1
BOIL: Towards Representation Change for Few-shot Learning | Code | 1
Detecting Hate Speech with GPT-3 | Code | 1
Borrowing Knowledge From Pre-trained Language Model: A New Data-efficient Visual Learning Paradigm | Code | 1
Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation | Code | 1
Bridging Few-Shot Learning and Adaptation: New Challenges of Support-Query Shift | Code | 1
DETA: Denoised Task Adaptation for Few-Shot Learning | Code | 1
Bridging Molecular Graphs and Large Language Models | Code | 1
Hypernetwork approach to Bayesian MAML | Code | 1
Hyperseed: Unsupervised Learning with Vector Symbolic Architectures | Code | 1
Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning | Code | 1
BSNet: Bi-Similarity Network for Few-shot Fine-grained Image Classification | Code | 1
DETReg: Unsupervised Pretraining with Region Priors for Object Detection | Code | 1
IEPT: Instance-Level and Episode-Level Pretext Tasks for Few-Shot Learning | Code | 1
Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models | Code | 1
An Explanation of In-context Learning as Implicit Bayesian Inference | Code | 1
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
Page 8 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | - | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | - | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | - | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | - | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | - | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | - | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | - | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | - | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | - | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | - | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | - | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | - | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | - | Unverified
2 | CAL | Harmonic mean | 35.2 | - | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | - | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | - | Unverified
2 | TIM-GD | Accuracy | 87.4 | - | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | - | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | - | Unverified
3 | HyperShot | Accuracy | 53.18 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | - | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | - | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | - | Unverified
2 | DPGN | Acc | 67.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | - | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | - | Unverified