
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts a pre-trained model to a new task by updating only a small fraction of its parameters, or by adding a few small trainable modules, while the rest of the model stays frozen. This is particularly useful when compute or memory is limited, and because the pre-trained weights are left untouched, it also helps preserve the model's performance on its original task.
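To make the idea concrete, here is a minimal PyTorch sketch of the common pattern: freeze every pre-trained parameter and train only a small task-specific module. All dimensions, layer choices, and hyperparameters below are illustrative, not drawn from any particular paper on this page.

```python
# Minimal PEFT pattern: frozen backbone, small trainable head.
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (sizes are illustrative).
base_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12,
)

# Freeze the backbone so fine-tuning cannot alter its weights.
for param in base_model.parameters():
    param.requires_grad = False

# The classifier head holds the only parameters the optimizer updates.
classifier = nn.Linear(768, 2)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-3)

x = torch.randn(4, 16, 768)            # (batch, seq_len, hidden) dummy batch
labels = torch.randint(0, 2, (4,))

logits = classifier(base_model(x).mean(dim=1))  # mean-pool, then classify
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                         # gradients reach only the head
optimizer.step()
```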

Papers

Showing 926–935 of 935 papers

Title | Status | Hype
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | Code | 0
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | | 0
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
Towards a Unified View of Parameter-Efficient Transfer Learning | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | Code | 1
LoRA: Low-Rank Adaptation of Large Language Models | Code | 2
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Code | 3
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks | Code | 1
Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding | | 0
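Among the methods listed above, LoRA (Low-Rank Adaptation) is a representative example: the pre-trained weight matrix is frozen and a low-rank additive update is learned in its place. The sketch below is a simplified reading of that idea, not the reference implementation; the rank and scaling values are illustrative.

```python
# Rough sketch of the low-rank adaptation idea from "LoRA: Low-Rank
# Adaptation of Large Language Models" (listed above). Rank and alpha
# are illustrative choices, not the paper's recommended settings.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # W and b stay fixed
            p.requires_grad = False
        # Delta-W is factored as B @ A, with rank << min(in, out).
        # A starts near zero-mean Gaussian, B at zero, so Delta-W = 0 initially.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + b + scaling * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # roughly 12k of 600k parameters
```

Because only the two small factor matrices are trained, the number of updated parameters drops by orders of magnitude relative to full fine-tuning of the same layer.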

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified