SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts a pre-trained model to a new task by training only a small fraction of its parameters, or a small set of added modules, while keeping the bulk of the pre-trained weights frozen. This is particularly useful when computational resources are limited, and freezing the original weights also helps preserve the base model's existing capabilities.
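As a rough illustration of the idea, the sketch below shows a LoRA-style low-rank adapter, one common PEFT approach: the pre-trained linear layer is frozen and only two small low-rank matrices are trained. This is a minimal sketch, not the method of any specific paper listed on this page; the class name, dimensions, rank, and scaling are illustrative assumptions.

```python
# Minimal LoRA-style PEFT sketch (illustrative; names/dimensions are assumptions).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Frozen pre-trained projection: its weights are not updated.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable low-rank update: effective weight is W + (alpha/rank) * B @ A.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the low-rank factors train
```

Because `lora_B` starts at zero, the adapted layer initially behaves exactly like the frozen pre-trained layer, and fine-tuning only has to learn the low-rank correction.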

Papers

Showing 251-260 of 935 papers

| Title | Status | Hype |
|---|---|---|
| ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1 |
| Expanding Sparse Tuning for Low Memory Usage | Code | 1 |
| Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1 |
| Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1 |
| Generative Parameter-Efficient Fine-Tuning | Code | 1 |
| Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1 |
| Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1 |
| KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1 |
| LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | Code | 1 |
| Parameter-Efficient Fine-Tuning of State Space Models | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |