SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters (or a small set of added modules), keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
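As a concrete illustration, the sketch below shows LoRA (Low-Rank Adaptation), one widely used PEFT method that appears throughout the paper list on this page: the pre-trained weights are frozen and only a small low-rank correction is trained. This is a minimal PyTorch sketch, not the implementation from any paper listed here; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A projects down to rank r, B projects back up; B starts at zero so the
        # wrapped layer initially computes exactly the same function as the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction; only A and B get gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable parameters: {trainable} / {total}")  # 12288 / 602880
```

Since only `A` and `B` receive gradients, roughly 2% of this layer's parameters are trained, which is what makes the approach attractive under tight compute and memory budgets.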

Papers

Showing 211–220 of 935 papers

| Title | Status | Hype |
|---|---|---|
| RandLoRA: Full-rank parameter-efficient fine-tuning of large models | | 0 |
| Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Code | 1 |
| Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA | | 0 |
| Parameter Efficient Fine-Tuning of Segment Anything Model | Code | 1 |
| Norm-Bounded Low-Rank Adaptation | | 0 |
| High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2 | | 0 |
| Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability | | 0 |
| LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation | | 0 |
| Fine Tuning without Catastrophic Forgetting via Selective Low Rank Adaptation | | 0 |
| LoRAGuard: An Effective Black-box Watermarking Approach for LoRAs | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |