SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of their parameters; the bulk of the pre-trained weights stay frozen. This makes fine-tuning practical when computational resources are limited, and it helps preserve the original model's performance on its initial task.
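
To make the idea concrete, the sketch below shows one widely used PEFT method, LoRA (Low-Rank Adaptation), in plain PyTorch. This is a minimal illustrative implementation, not code from any of the papers listed here; the LoRALinear class and the rank and alpha values are assumptions made for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    Illustrative LoRA-style PEFT sketch: the base weight stays frozen,
    and only the low-rank factors A and B are trained, so the number of
    trainable parameters is r * (in + out) instead of in * out.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        in_f, out_f = base.in_features, base.out_features
        # B starts at zero so training begins at the original
        # model's behavior (the low-rank update is initially a no-op).
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total}")
```

For a single 768x768 layer this trains roughly 12K of about 600K parameters (around 2%), which is the core trade PEFT makes: a small trainable adapter on top of a frozen model.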

Papers

Showing 841–850 of 935 papers

Title | Status | Hype
PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | | 0
PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification | | 0
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models | | 0
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | | 0
PEFTDebias: Capturing debiasing information using PEFTs | | 0
PEFT-MedAware: Large Language Model for Medical Awareness | | 0
PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models | | 0
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning | | 0
PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model | | 0
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified