SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the model's parameters (or a small number of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, and it helps preserve the base model's performance on its original tasks.
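To make the idea concrete, below is a minimal LoRA-style sketch in PyTorch (low-rank adaptation is one of the PEFT methods that appears in the paper list that follows). This is an illustrative sketch, not the method of any listed paper; the class name, rank, and scaling factor are arbitrary choices. The pre-trained weights stay frozen and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Effective weight: W + (alpha / rank) * B @ A, with only A and B trainable.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap one layer of a pre-trained model and train only the LoRA factors.
layer = LoRALinear(nn.Linear(768, 768))
x = torch.randn(2, 768)
y = layer(x)  # identical to the frozen base at init, since lora_b starts at zero

trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # ~12k of ~600k parameters (~2%)
```

Because lora_b is initialized to zero, the wrapped layer behaves exactly like the frozen base model at the start of training; fine-tuning only moves the low-rank update.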

Papers

Showing 811–820 of 935 papers

Title | Status | Hype
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | - | 0
From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers | Code | 0
LoTR: Low Tensor Rank Weight Adaptation | - | 0
Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model | - | 0
X-PEFT: eXtremely Parameter-Efficient Fine-Tuning for Extreme Multi-Profile Scenarios | - | 0
The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness | - | 0
PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation | - | 0
FinSQL: Model-Agnostic LLMs-based Text-to-SQL Framework for Financial Analysis | - | 0
OrchMoE: Efficient Multi-Adapter Learning with Task-Skill Synergy | - | 0
Adapters Mixup: Mixing Parameter-Efficient Adapters to Enhance the Adversarial Robustness of Fine-tuned Pre-trained Text Classifiers | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified