SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding a small number of new trainable parameters, while keeping the rest of the model frozen. This approach is particularly useful when compute or memory is limited, and it helps preserve the original model's capabilities on its pre-training tasks.
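As a rough illustration of the idea, the sketch below shows one common PEFT technique, a LoRA-style low-rank update added to a frozen linear layer. It is a minimal example assuming PyTorch; the class and parameter names (LoRALinear, rank, alpha) are illustrative and not taken from any specific paper listed on this page.

```python
# Minimal LoRA-style PEFT sketch (illustrative; assumes PyTorch is installed).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear and add a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained weights
            p.requires_grad = False
        # Only these two small matrices are trained.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank correction; gradients flow only through A and B.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Usage: wrap a projection layer and count how few parameters are trainable.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

With rank 8 on a 4096x4096 projection, well under 1% of the layer's parameters receive gradients, which is the efficiency the methods listed below exploit in various ways.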

Papers

Showing 21–30 of 935 papers

Title | Status | Hype
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | Code | 3
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Code | 3
LoRA-GA: Low-Rank Adaptation with Gradient Approximation | Code | 3
Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2
Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2
InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning | Code | 2
Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2
An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | Code | 2
ClassWise-SAM-Adapter: Parameter Efficient Fine-tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation | Code | 2
GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2
Page 3 of 94

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified