SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a technique used to adapt pre-trained models to new tasks with minimal changes to the model's parameters. This approach is particularly useful in scenarios where computational resources are limited or when it is desirable to maintain the original model's performance on the initial task.
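As a concrete illustration, the sketch below shows one common PEFT recipe, a LoRA-style low-rank adapter wrapped around a frozen linear layer, so that only a small number of new parameters are trained. It assumes PyTorch; the class and parameter names (`LoRALinear`, `rank`, `alpha`) are illustrative and not taken from any specific paper or library listed below.

```python
# Minimal sketch of a LoRA-style low-rank adapter (illustrative; assumes PyTorch).
# The pre-trained weight stays frozen; only the small matrices A and B are trained,
# so the trainable parameter count is 2 * rank * d instead of d * d per layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen projection + scaled low-rank update (B @ A) applied to x
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Usage: wrap an existing layer; only A and B receive gradients during fine-tuning.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 2 * 8 * 768
```

Because the adapter is additive and the base weights never change, the original model's behavior can be recovered exactly by dropping (or zeroing) the adapter, which is what makes this style of tuning attractive when the initial task must remain intact.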

Papers

Showing papers 21–30 of 935 (page 3 of 94)

| Title | Status | Hype |
| --- | --- | --- |
| RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation | Code | 3 |
| S-LoRA: Serving Thousands of Concurrent LoRA Adapters | Code | 3 |
| Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Code | 3 |
| LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Code | 2 |
| TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling | Code | 2 |
| Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation | Code | 2 |
| Unlocking the Hidden Potential of CLIP in Generalizable Deepfake Detection | Code | 2 |
| A Survey on Federated Fine-tuning of Large Language Models | Code | 2 |
| Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment | Code | 2 |
| SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |