SOTAVerified

Few-Shot Learning

Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during a meta-training phase so that, during a meta-testing phase, it generalizes well to unseen (but related) tasks from only a few examples. An effective approach to few-shot learning is to learn a representation shared across tasks and to train task-specific classifiers on top of that representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
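The "shared representation + task-specific classifier" recipe above can be sketched with a minimal nearest-prototype classifier (in the spirit of prototypical networks). This is an illustrative toy, not code from any paper listed below: the 2-D "embeddings" stand in for the output of a hypothetical frozen feature extractor, and the episode is a made-up 2-way, 2-shot example.

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_classes):
    """Mean embedding per class, computed from the few labeled support examples."""
    return np.stack([
        support_emb[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def nearest_prototype(query_emb, protos):
    """Assign each query to the class of its nearest prototype (Euclidean distance)."""
    # dists[i, c] = distance from query i to prototype of class c
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode in a hypothetical 2-D embedding space.
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(support, labels, n_classes=2)

query = np.array([[0.1, 0.0], [1.0, 0.9]])
pred = nearest_prototype(query, protos)  # -> array([0, 1])
```

Here only the tiny per-task "classifier" (the prototypes) is fit from the support set; in the full meta-learning setting the embedding function itself would be trained across many such episodes.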

Papers

Showing 1101–1150 of 2964 papers

Title | Status | Hype
Selective Vision-Language Subspace Projection for Few-shot CLIP | Code | 0
A Large Encoder-Decoder Family of Foundation Models For Chemical Language | Code | 0
Pre-Training and Prompting for Few-Shot Node Classification on Text-Attributed Graphs | | 0
MedSAGa: Few-shot Memory Efficient Medical Image Segmentation using Gradient Low-Rank Projection in SAM | | 0
A Comprehensive Review of Few-shot Action Recognition | | 0
Automatic Generation of Fashion Images using Prompting in Generative Machine Learning Models | Code | 0
PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision | | 0
CellularLint: A Systematic Approach to Identify Inconsistent Behavior in Cellular Network Specifications | | 0
Evaluating Linguistic Capabilities of Multimodal LLMs in the Lens of Few-Shot Learning | | 0
A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification | | 0
Reasoning with Large Language Models, a Survey | | 0
Swiss DINO: Efficient and Versatile Vision Framework for On-device Personal Object Search | Code | 0
FsPONER: Few-shot Prompt Optimization for Named Entity Recognition in Domain-specific Scenarios | Code | 0
Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion | | 0
Identification of emotions on Twitter during the 2022 electoral process in Colombia | | 0
Measuring Sustainability Intention of ESG Fund Disclosure using Few-Shot Learning | | 0
Bringing Masked Autoencoders Explicit Contrastive Properties for Point Cloud Self-Supervised Learning | Code | 0
Using Grammar Masking to Ensure Syntactic Validity in LLM-based Modeling Tasks | | 0
Learning to Adapt Category Consistent Meta-Feature of CLIP for Few-Shot Classification | | 0
Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations | Code | 0
Few-Shot Airway-Tree Modeling using Data-Driven Sparse Priors | | 0
Argument Mining in Data Scarce Settings: Cross-lingual Transfer and Few-shot Techniques | Code | 0
Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models | Code | 0
Fully Fine-tuned CLIP Models are Efficient Few-Shot Learners | | 0
SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning | | 0
Neurocache: Efficient Vector Retrieval for Long-range Language Modeling | Code | 0
Core Knowledge Learning Framework for Graph Adaptation and Scalability Learning | | 0
Dynamic Few-Shot Learning for Knowledge Graph Question Answering | | 0
Financial Knowledge Large Language Model | | 0
Human-Free Automated Prompting for Vision-Language Anomaly Detection: Prompt Optimization with Meta-guiding Prompt Scheme | | 0
Masked Generative Extractor for Synergistic Representation and 3D Generation of Point Clouds | | 0
Exploring Factual Entailment with NLI: A News Media Study | | 0
Exploring Cross-Domain Few-Shot Classification via Frequency-Aware Prompting | Code | 0
AnnotatedTables: A Large Tabular Dataset with Language Model Annotations | | 0
Training-Free Exponential Context Extension via Cascading KV Cache | Code | 0
Evaluating the Effectiveness of the Foundational Models for Q&A Classification in Mental Health care | | 0
Distributed Rule Vectors is A Key Mechanism in Large Language Models' In-Context Learning | | 0
Review of Zero-Shot and Few-Shot AI Algorithms in The Medical Domain | | 0
Sports Intelligence: Assessing the Sports Understanding Capabilities of Language Models through Question Answering from Text to Video | | 0
Contextual Interaction via Primitive-based Adversarial Training For Compositional Zero-shot Learning | Code | 0
VLM Agents Generate Their Own Memories: Distilling Experience into Embodied Programs of Thought | | 0
Communication-Efficient and Privacy-Preserving Decentralized Meta-Learning | | 0
Putting GPT-4o to the Sword: A Comprehensive Evaluation of Language, Vision, Speech, and Multimodal Proficiency | | 0
IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning | | 0
Using Multimodal Large Language Models for Automated Detection of Traffic Safety Critical Events | | 0
VIRL: Volume-Informed Representation Learning towards Few-shot Manufacturability Estimation | Code | 0
AnyTrans: Translate AnyText in the Image with Large Scale Models | | 0
Mining Open Semantics from CLIP: A Relation Transition Perspective for Few-Shot Learning | | 0
RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information | Code | 0
COOL: Comprehensive Knowledge Enhanced Prompt Learning for Domain Adaptive Few-shot Fake News Detection | | 0
Page 23 of 60

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | gpt-4-0125-preview | Accuracy | 61.91 | | Unverified
2 | gpt-4-0125-preview | Accuracy | 52.49 | | Unverified
3 | gpt-3.5-turbo | Accuracy | 41.48 | | Unverified
4 | gpt-3.5-turbo | Accuracy | 37.06 | | Unverified
5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | | Unverified
6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | | Unverified
7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | | Unverified
8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | | Unverified
9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | | Unverified
10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 67.27 | | Unverified
2 | SaSPA + CAL | 4-shot Accuracy | 48.3 | | Unverified
3 | Real-Guidance + CAL | 4-shot Accuracy | 41.5 | | Unverified
4 | CAL | 4-shot Accuracy | 40.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | Harmonic mean | 52.2 | | Unverified
2 | CAL | Harmonic mean | 35.2 | | Unverified
3 | Variational Prompt Tuning | Harmonic mean | 34.69 | | Unverified
4 | Real-Guidance + CAL | Harmonic mean | 34.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BGNN | Accuracy | 92.7 | | Unverified
2 | TIM-GD | Accuracy | 87.4 | | Unverified
3 | UNEM-Gaussian | Accuracy | 66.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | EASY (transductive) | Accuracy | 82.75 | | Unverified
2 | HCTransformers | 5 way 1~2 shot | 74.74 | | Unverified
3 | HyperShot | Accuracy | 53.18 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SaSPA + CAL | 4-shot Accuracy | 66.7 | | Unverified
2 | Real-Guidance + CAL | 4-shot Accuracy | 44.3 | | Unverified
3 | CAL | 4-shot Accuracy | 42.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HCTransformers | Acc | 74.74 | | Unverified
2 | DPGN | Acc | 67.6 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MetaGen Blended RAG (zero-shot) | Accuracy | 77.9 | | Unverified
2 | CoT-T5-11B (1024 Shot) | Accuracy | 73.42 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.44 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 68.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 77.71 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 81.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 91.57 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CovidExpert | AUC-ROC | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CoT-T5-11B (1024 Shot) | Accuracy | 78.02 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 65.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 73.2 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 96.82 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 73.07 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 78.51 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | UNEM-Gaussian | Accuracy | 52.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Variational Prompt Tuning | Harmonic mean | 79 | | Unverified