SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters (or a small set of newly added ones). This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's capabilities on its initial task.
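Many of the papers listed below build on low-rank adaptation (LoRA), one of the most widely used PEFT methods. As a minimal sketch of the general idea (not any specific paper's method), the following assumes PyTorch; the layer dimensions, rank, and scaling factor are illustrative placeholders. The pre-trained weight is frozen and only two small low-rank matrices are trained:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA-style sketch).

    The pre-trained weight W stays fixed; only the factors A (rank x in) and
    B (out x rank) are trained, cutting trainable parameters from out*in
    down to rank*(out + in).
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        # B starts at zero, so the wrapped layer initially matches the base layer
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Hypothetical usage: wrap one projection of a pre-trained model and
# optimize only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

At merge time the low-rank product can be folded back into the frozen weight, so the adapted model incurs no extra inference cost; the variants below (AdaMix, AlphaLoRA, Hydra, etc.) differ mainly in where adapters are placed and how they are combined.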

Papers

Showing 101–125 of 935 papers

Title | Status | Hype
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1
Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning | Code | 1
Less Could Be Better: Parameter-efficient Fine-tuning Advances Medical Vision Foundation Models | Code | 1
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning | Code | 1
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
Efficient Self-Supervised Adaptation for Medical Image Analysis | Code | 1
DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning | Code | 1
LoKI: Low-damage Knowledge Implanting of Large Language Models | Code | 1
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models | Code | 1
Advancing Parameter Efficiency in Fine-tuning via Representation Editing | Code | 1
DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1
Page 5 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | n/a | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | n/a | Unverified