
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks while updating only a small fraction of the model's parameters: most pre-trained weights stay frozen, and training is confined to a small set of added or selected parameters such as low-rank adapters (LoRA), adapter layers, or prefix tokens. This makes fine-tuning practical when computational resources are limited and helps preserve the original model's behaviour on its initial task.
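
Many of the papers listed below build on low-rank adaptation (LoRA), one common PEFT method. As a concrete illustration, here is a minimal sketch, in plain PyTorch, of a LoRA-style adapter wrapped around a frozen linear layer; the class and parameter names (LoRALinear, rank, alpha) are illustrative assumptions, not the API of any particular library or paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: W x + (alpha/rank) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)       # freeze the pre-trained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)     # freeze the bias as well
        in_f, out_f = base.in_features, base.out_features
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage sketch: wrap a layer of a pre-trained model and train only the LoRA factors.
layer = nn.Linear(768, 768)
peft_layer = LoRALinear(layer, rank=8)
trainable = [p for p in peft_layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")  # 2 * 8 * 768 = 12288
```

Initializing lora_B to zero means the wrapped layer initially computes exactly the frozen base mapping, so fine-tuning starts from the pre-trained model's behaviour and only gradually departs from it.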

Papers

Showing 331–340 of 935 papers

Title | Status | Hype
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
NLoRA: Nyström-Initiated Low-Rank Adaptation for Large Language Models | Code | 0
Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators | Code | 0
Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models | Code | 0
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training | Code | 0
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning | Code | 0
15,500 Seconds: Lean UAV Classification Leveraging PEFT and Pre-Trained Networks | Code | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified