SOTAVerified

Few-Shot Learning

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation for the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
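The common-representation-plus-task-specific-classifier recipe described above can be sketched in a few lines. In the sketch below, a fixed embedding function stands in for a meta-trained shared representation, and each new task builds its own classifier as class-mean prototypes of its support examples (in the style of prototypical networks, i.e. nearest-class-mean in embedding space). Everything here (the linear-plus-tanh embedder, the toy Gaussian episode, the function names) is illustrative, not the method of any particular listed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    # Shared representation: a fixed nonlinear map standing in for a
    # meta-trained feature extractor.
    return np.tanh(x @ W)

def fit_prototypes(support_x, support_y, W):
    # Task-specific "classifier": one prototype (mean embedding) per class,
    # computed from the task's few labeled support examples.
    classes = np.unique(support_y)
    protos = np.stack([embed(support_x[support_y == c], W).mean(axis=0)
                       for c in classes])
    return classes, protos

def predict(query_x, classes, protos, W):
    # Label each query with the class of its nearest prototype
    # (squared Euclidean distance in embedding space).
    z = embed(query_x, W)
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 5-shot episode: two well-separated Gaussian clusters.
W = rng.normal(size=(4, 8))
support_x = np.vstack([rng.normal(loc=-3.0, size=(5, 4)),
                       rng.normal(loc=+3.0, size=(5, 4))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.vstack([rng.normal(loc=-3.0, size=(3, 4)),
                     rng.normal(loc=+3.0, size=(3, 4))])

classes, protos = fit_prototypes(support_x, support_y, W)
print(predict(query_x, classes, protos, W))  # labels for the 6 queries
```

Note that only the prototypes are task-specific; the embedding is reused across tasks, which is what lets the classifier work from five examples per class instead of thousands.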

Papers

Showing 801–850 of 2964 papers

Title | Status | Hype
ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent Approach | | 0
Corrective In-Context Learning: Evaluating Self-Correction in Large Language Models | Code | 0
Sparseformer: a Transferable Transformer with Multi-granularity Token Sparsification for Medical Time Series Classification | | 0
Conjuring Positive Pairs for Efficient Unification of Representation Learning and Image Synthesis | | 0
Riemannian Geometric-based Meta Learning | | 0
Optimizing Large Language Models for Detecting Symptoms of Comorbid Depression or Anxiety in Chronic Diseases: Insights from Patient Messages | | 0
DRESS: Disentangled Representation-based Self-Supervised Meta-Learning for Diverse Tasks | Code | 0
Membership Inference Attacks fueled by Few-Short Learning to detect privacy leakage tackling data integrity | | 0
From Dataset to Real-world: General 3D Object Detection via Generalized Cross-domain Few-shot Learning | | 0
Evaluation of the Automated Labeling Method for Taxonomic Nomenclature Through Prompt-Optimized Large Language Model | | 0
Memory Is All You Need: Testing How Model Memory Affects LLM Performance in Annotation Tasks | | 0
Rethinking Few-Shot Medical Image Segmentation by SAM2: A Training-Free Framework with Augmentative Prompting and Dynamic Matching | | 0
Use Me Wisely: AI-Driven Assessment for LLM Prompting Skills Development | | 0
Malware Classification from Memory Dumps Using Machine Learning, Transformers, and Large Language Models | | 0
Frankenstein Optimizer: Harnessing the Potential by Revisiting Optimization Tricks | Code | 0
ExpertGenQA: Open-ended QA generation in Specialized Domains | | 0
Network Traffic Classification Using Machine Learning, Transformer, and Large Language Models | | 0
Enhancing Multi-hop Reasoning in Vision-Language Models via Self-Distillation with Multi-Prompt Ensembling | | 0
Diversity Covariance-Aware Prompt Learning for Vision-Language Models | | 0
Transformer Based Self-Context Aware Prediction for Few-Shot Anomaly Detection in Videos | | 0
Learning to Animate Images from A Few Videos to Portray Delicate Human Actions | | 0
LADs: Leveraging LLMs for AI-Driven DevOps | | 0
Large Language Models as Attribution Regularizers for Efficient Model Training | Code | 0
Few-Shot Multilingual Open-Domain QA from 5 Examples | Code | 0
An Autonomous Network Orchestration Framework Integrating Large Language Models with Continual Reinforcement Learning | | 0
A Similarity Paradigm Through Textual Regularization Without Forgetting | | 0
Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks | Code | 0
Retrieving Versus Understanding Extractive Evidence in Few-Shot Learning | | 0
UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery | Code | 0
RM-PoT: Reformulating Mathematical Problems and Solving via Program of Thoughts | | 0
Do we still need Human Annotators? Prompting Large Language Models for Aspect Sentiment Quad Prediction | Code | 0
RIDE: Enhancing Large Language Model Alignment through Restyled In-Context Learning Demonstration Exemplars | Code | 0
SEM-CLIP: Precise Few-Shot Learning for Nanoscale Defect Detection in Scanning Electron Microscope Image | | 0
A Hybrid Model for Few-Shot Text Classification Using Transfer and Meta-Learning | | 0
Cancer Vaccine Adjuvant Name Recognition from Biomedical Literature using Large Language Models | Code | 0
A Flag Decomposition for Hierarchical Datasets | Code | 0
Is LLM an Overconfident Judge? Unveiling the Capabilities of LLMs in Detecting Offensive Language with Annotation Disagreement | Code | 0
WatchGuardian: Enabling User-Defined Personalized Just-in-Time Intervention on Smartwatch | | 0
Transforming Multimodal Models into Action Models for Radiotherapy | | 0
OmniRL: In-Context Reinforcement Learning by Large-Scale Meta-Training in Randomized Worlds | | 0
RoboGrasp: A Universal Grasping Policy for Robust Robotic Control | | 0
FewTopNER: Integrating Few-Shot Learning with Topic Modeling and Named Entity Recognition in a Multilingual Framework | Code | 0
Can LLMs Assist Annotators in Identifying Morality Frames? -- Case Study on Vaccination Debate on Social Media | | 0
An Analysis of LLM Fine-Tuning and Few-Shot Learning for Flaky Test Detection and Classification | | 0
Learning to Learn Weight Generation via Local Consistency Diffusion | | 0
Differentially Private In-context Learning via Sampling Few-shot Mixed with Zero-shot Outputs | | 0
Memory-Efficient Fine-Tuning of Transformers via Token Selection | Code | 0
ISAM-MTL: Cross-subject multi-task learning model with identifiable spikes and associative memory networks | | 0
Unraveling the Capabilities of Language Models in News Summarization | Code | 0
Distilling Large Language Models for Network Active Queue Management | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | | Unverified
2 | CAL | Harmonic mean | 35.2 | | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | | Unverified
2 | TIM-GD | Accuracy | 87.4 | | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | | Unverified
3 | HyperShot | Accuracy | 53.18 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | | Unverified
2 | DPGN | Acc | 67.6 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | | Unverified