Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the parameters, or a small set of added modules, while the rest of the weights stay frozen. This approach is particularly useful when computational resources are limited, or when the original model's behavior on its initial task should be preserved.
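
Many of the papers listed below build on low-rank adaptation (LoRA), in which a frozen weight matrix W is augmented with a trainable low-rank update B·A, so only the small factors A and B are trained. The following is a minimal sketch of that idea, assuming PyTorch; the class name LoRALinear and the hyperparameters r=8 and alpha=16 are illustrative choices, not a reference implementation from any paper on this page.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen linear layer with a trainable low-rank update:
    # y = W x + (alpha / r) * B (A x), where only A and B are trained.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus the low-rank correction; gradients flow only to A and B.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # 8,192 of 270,848 parameters

Because B is zero-initialized, the wrapped layer reproduces the base layer exactly at the start of fine-tuning; at rank 8 on a 512x512 layer, the trainable adapter is about 3% of the layer's parameters.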

Papers

Showing 26–50 of 935 papers

Title | Status | Hype
Uni-LoRA: One Vector is All You Need | | 0
LoRA as a Flexible Framework for Securing Large Vision Systems | | 0
Assortment of Attention Heads: Accelerating Federated PEFT with Head Pruning and Strategic Client Selection | | 0
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
On Fairness of Task Arithmetic: The Role of Task Vectors | | 0
Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models | | 0
Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert | | 0
Weight Spectra Induced Efficient Model Adaptation | | 0
Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need | Code | 0
MAP: Revisiting Weight Decomposition for Low-Rank Adaptation | | 0
Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics | Code | 0
SC-LoRA: Balancing Efficient Fine-tuning and Knowledge Preservation via Subspace-Constrained LoRA | | 0
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning | | 0
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective | | 0
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
Permissioned LLMs: Enforcing Access Control in Large Language Models | | 0
LoKI: Low-damage Knowledge Implanting of Large Language Models | Code | 1
DLP: Dynamic Layerwise Pruning in Large Language Models | Code | 0
LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning | | 0
Parameter-Efficient Fine-Tuning with Column Space Projection | | 0
UORA: Uniform Orthogonal Reinitialization Adaptation in Parameter-Efficient Fine-Tuning of Large Models | | 0
Optimization-Inspired Few-Shot Adaptation for Large Language Models | | 0
Universal Reasoner: A Single, Composable Plug-and-Play Reasoner for Frozen LLMs | Code | 1
HD-PiSSA: High-Rank Distributed Orthogonal Adaptation | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified