SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, typically freezing the original weights and training a small set of added or selected parameters. This approach is particularly useful when compute or memory is limited, and it helps preserve the pre-trained model's behavior on its original tasks.
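The listed methods differ in detail, but most follow the same pattern: keep the pre-trained weights frozen and train a small number of extra parameters. As an illustration only, the sketch below shows a minimal LoRA-style adapter in PyTorch; the class name, rank, and alpha values are hypothetical choices for this example and are not taken from any paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # A is initialized with small random values, B with zeros,
        # so the low-rank update is zero at the start of fine-tuning.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # frozen base output + scaled low-rank trainable correction
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```

With these illustrative settings, only the two low-rank matrices (about 2% of the layer's parameters) receive gradients, which is the source of the memory and compute savings.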

Papers

Showing 701–710 of 935 papers

Title | Status | Hype
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation | No code | 0
Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | No code | 0
Open-Vocabulary Calibration for Fine-tuned CLIP | Code | 1
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1
Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning | No code | 0
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning | Code | 0
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models | Code | 1
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | No code | 0
Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey | Code | 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified