SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the parameters (or small added modules) while keeping the rest of the model frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's capabilities on its initial task.
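To make the idea concrete, here is a minimal, hypothetical sketch of one popular PEFT method, a LoRA-style low-rank adapter (several papers listed below build on LoRA). All names and dimensions are illustrative, not from any specific paper: the frozen pretrained weight `W` is left untouched, and only two small low-rank factors `A` and `B` would be trained.

```python
import numpy as np

# Illustrative LoRA-style adapter sketch (hypothetical names/dimensions).
# The pretrained weight W stays frozen; only the low-rank factors A and B
# are trainable, cutting trainable parameters from d_out*d_in
# down to r*(d_in + d_out).

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4               # r << d_in is the low-rank bottleneck
alpha = 8.0                              # common LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so the adapter starts as a no-op

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d_in))
full_params = W.size
lora_params = A.size + B.size
print(lora_params, full_params)  # the adapter is a small fraction of W
```

Because `B` is zero-initialized, the adapted layer initially reproduces the frozen model exactly; training then moves only the `r*(d_in + d_out)` adapter parameters.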

Papers

Showing 671–680 of 935 papers

Title | Status | Hype
Inducing Generalization across Languages and Tasks using Featurized Low-Rank Mixtures | - | 0
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models | - | 0
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1
MIP: CLIP-based Image Reconstruction from PEFT Gradients | - | 0
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge | - | 0
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | - | 0
Multimodal Instruction Tuning with Conditional Mixture of LoRA | Code | 1
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | - | 0
Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy? | - | 0
Page 68 of 94

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified