SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during a meta-training phase so that it generalizes well to unseen (but related) tasks from just a few examples during a meta-testing phase. An effective approach to few-shot learning is to learn a representation shared across tasks and to train task-specific classifiers on top of it.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
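The shared-representation recipe above can be sketched as a tiny prototypical-style classifier — a minimal illustration under stated assumptions, not the method of the cited paper or of any listed paper. A fixed linear projection stands in for the meta-learned shared encoder (a real system would use a trained network), and the task-specific classifier is nearest-prototype over class-mean embeddings on a toy 2-way 3-shot episode; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Stand-in for the shared representation learned during meta-training:
    # a fixed linear projection from 8 input dims to 4 embedding dims.
    W = np.linspace(-1.0, 1.0, 8 * 4).reshape(8, 4)
    return x @ W

def prototype_classify(support_x, support_y, query_x):
    """Task-specific classifier built on the frozen shared representation:
    one prototype (mean embedding) per class, nearest-prototype prediction."""
    z_support = encode(support_x)
    z_query = encode(query_x)
    classes = np.unique(support_y)
    protos = np.stack([z_support[support_y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from each query embedding to each prototype.
    dists = ((z_query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 3-shot episode: class 0 clusters near -1, class 1 near +1.
support_x = np.vstack([rng.normal(-1, 0.1, (3, 8)), rng.normal(1, 0.1, (3, 8))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.vstack([rng.normal(-1, 0.1, (4, 8)), rng.normal(1, 0.1, (4, 8))])
pred = prototype_classify(support_x, support_y, query_x)
```

Only the classifier head adapts per task (here, just the prototypes), which is why a handful of labeled examples suffice at meta-test time.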

Papers

Showing 301–350 of 2964 papers

Title | Status | Hype
Integrating Large Language Models with Internet of Things Applications | — | 0
Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts | — | 0
Exploring structure diversity in atomic resolution microscopy with graph neural networks | — | 0
Composing Diffusion Policies for Few-shot Learning of Movement Trajectories | — | 0
Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods | — | 0
Benchmarking Pathology Foundation Models: Adaptation Strategies and Scenarios | Code | 0
Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification | — | 0
MI-VisionShot: Few-shot adaptation of vision-language models for slide-level classification of histopathological images | Code | 0
EPIC: Efficient Position-Independent Caching for Serving Large Language Models | — | 0
FoMo: A Foundation Model for Mobile Traffic Forecasting with Diffusion Model | — | 0
A Prompt Refinement-based Large Language Model for Metro Passenger Flow Forecasting under Delay Conditions | — | 0
iFuzzyTL: Interpretable Fuzzy Transfer Learning for SSVEP BCI System | — | 0
Use Random Selection for Now: Investigation of Few-Shot Selection Strategies in LLM-based Text Augmentation for Classification | Code | 0
A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning | — | 0
Neural networks that overcome classic challenges through practice | — | 0
GraphCLIP: Enhancing Transferability in Graph Foundation Models for Text-Attributed Graphs | Code | 2
KNN Transformer with Pyramid Prompts for Few-Shot Learning | — | 0
Cross-Modal Few-Shot Learning: a Generative Transfer Learning Framework | — | 0
Large Model for Small Data: Foundation Model for Cross-Modal RF Human Activity Recognition | — | 0
Context-Aware SQL Error Correction Using Few-Shot Learning -- A Novel Approach Based on NLQ, Error, and SQL Similarity | — | 0
Cross-Domain Evaluation of Few-Shot Classification Models: Natural Images vs. Histopathological Images | — | 0
SPORTU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models | Code | 1
Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions | — | 0
Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? | — | 0
DemoShapley: Valuation of Demonstrations for In-Context Learning | — | 0
OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting | Code | 1
QCircuitNet: A Large-Scale Hierarchical Dataset for Quantum Algorithm Design | Code | 1
On The Relationship between Visual Anomaly-free and Anomalous Representations | — | 0
DCP: Learning Accelerator Dataflow for Neural Network via Propagation | — | 0
Exploring the Readiness of Prominent Small Language Models for the Democratization of Financial Literacy | Code | 0
Investigating Cost-Efficiency of LLM-Generated Training Data for Conversational Semantic Frame Analysis | — | 0
Generating Synthetic Datasets for Few-shot Prompt Tuning | — | 0
Manual Verbalizer Enrichment for Few-Shot Text Classification | — | 0
Efficient Few-shot Learning for Multi-label Classification of Scientific Documents with Many Classes | Code | 1
Neural-Bayesian Program Learning for Few-shot Dialogue Intent Parsing | — | 0
Bridging Modalities: Enhancing Cross-Modality Hate Speech Detection with Few-Shot In-Context Learning | — | 0
ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning | — | 0
A Cross-Lingual Meta-Learning Method Based on Domain Adaptation for Speech Emotion Recognition | — | 0
Revisiting In-context Learning Inference Circuit in Large Language Models | Code | 0
Episodic fine-tuning prototypical networks for optimization-based few-shot learning: Application to audio classification | Code | 0
Adaptive Masking Enhances Visual Grounding | Code | 0
CalliffusionV2: Personalized Natural Calligraphy Generation with Flexible Multi-modal Control | — | 0
Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification | Code | 0
SAFLEX: Self-Adaptive Augmentation via Feature Label Extrapolation | — | 0
Towards a vision foundation model for comprehensive assessment of Cardiac MRI | — | 0
Auto-Demo Prompting: Leveraging Generated Outputs as Demonstrations for Enhanced Batch Prompting | — | 0
Intelligent Repetition Counting for Unseen Exercises: A Few-Shot Learning Approach with Sensor Signals | — | 0
From Natural Language to SQL: Review of LLM-based Text-to-SQL Systems | — | 0
Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification | Code | 0
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models | Code | 0
Page 7 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | — | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | — | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | — | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | — | Unverified
2 | CAL | Harmonic mean | 35.2 | — | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | — | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | — | Unverified
2 | TIM-GD | Accuracy | 87.4 | — | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | — | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | — | Unverified
3 | HyperShot | Accuracy | 53.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | — | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | — | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | — | Unverified
2 | DPGN | Acc | 67.6 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | — | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | — | Unverified