
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or adding a small number of new ones) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task should be preserved.
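
As a concrete illustration, here is a minimal sketch of low-rank adaptation (LoRA), one widely used PEFT method that appears in several of the papers listed below. It wraps a frozen linear layer with a trainable low-rank update. The class name and the hyperparameters r and alpha are illustrative assumptions, not any particular library's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A is small random, B is zero, so the update starts at zero and
        # the wrapped layer initially behaves exactly like the original.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Only the low-rank factors are trainable, a small fraction of the layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

In practice such wrappers replace selected projections (for example, the attention matrices) inside a pre-trained network, and the optimizer is given only the parameters with requires_grad=True.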

Papers

Showing 921-930 of 935 papers

Title | Status | Hype
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | | 0
Singular Value Fine-tuning for Few-Shot Class-Incremental Learning | | 0
Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning | | 0
SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture | | 0
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models | | 0
SOLIDO: A Robust Watermarking Method for Speech Synthesis via Low-Rank Adaptation | | 0
SoMA: Singular Value Decomposed Minor Components Adaptation for Domain Generalizable Representation Learning | | 0
SPAFIT: Stratified Progressive Adaptation Fine-tuning for Pre-trained Large Language Models | | 0
Sparsely Shared LoRA on Whisper for Child Speech Recognition | | 0
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified