SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of parameters, typically through small added modules such as adapters, prompts, or low-rank updates, while the original weights remain frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's behavior on its initial task.
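
As a concrete illustration, the sketch below shows one common PEFT pattern: a LoRA-style low-rank update wrapped around a frozen linear layer. It is a minimal, self-contained PyTorch example; the class name, rank, and layer sizes are illustrative assumptions, not a reference implementation from any of the papers listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (LoRA-style sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # Only A and B are trained: r * (in + out) parameters instead of in * out.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Usage: adapt a single 4096x4096 projection; only ~65K of ~16.8M parameters are trainable.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```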

Papers

Showing 751–760 of 935 papers

Title | Status | Hype
Context-PEFT: Efficient Multi-Modal, Multi-Task Fine-Tuning | — | 0
Extending Whisper with prompt tuning to target-speaker ASR | Code | 1
ICL Markup: Structuring In-Context Learning using Soft-Token Tags | — | 0
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1
Aligner: One Global Token is Worth Millions of Parameters When Aligning Large Language Models | — | 0
Fine-tuning vision foundation model for crack segmentation in civil infrastructures | — | 0
Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning | — | 0
MoSA: Mixture of Sparse Adapters for Visual Efficient Tuning | Code | 1
mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified