SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the parameters, either a subset of the existing weights or small added modules such as low-rank adapters, while the rest of the model stays frozen. This is particularly useful when computational resources are limited, and it helps preserve the original model's performance on its initial task.
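
As a concrete illustration of the idea, below is a minimal sketch of one widely used PEFT method, a LoRA-style low-rank adapter. The `LoRALinear` class and its `r` and `alpha` hyperparameters are illustrative assumptions for this sketch, not an implementation taken from any paper listed on this page.

```python
# Minimal LoRA-style adapter sketch (illustrative; names and defaults are assumptions).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and trains only a low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # Low-rank factors: A maps inputs down to rank r, B maps back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Only A and B receive gradients, so the trainable fraction is tiny.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # ~12k of ~600k parameters
```

Because only `A` and `B` are trained, the per-layer trainable parameter count drops from roughly d_in * d_out to r * (d_in + d_out), which is where the compute and memory savings come from.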

Papers

Showing 901–910 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| RocketPPA: Code-Level Power, Performance, and Area Prediction via LLM and Mixture of Experts |  | 0 |
| RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization |  | 0 |
| SPD-CFL: Stepwise Parameter Dropout for Efficient Continual Federated Learning |  | 0 |
| SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation |  | 0 |
| SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation |  | 0 |
| SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction |  | 0 |
| Scaled Prompt-Tuning for Few-Shot Natural Language Generation |  | 0 |
| Scaling Laws for Forgetting When Fine-Tuning Large Language Models |  | 0 |
| Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization |  | 0 |
| SC-LoRA: Balancing Efficient Fine-tuning and Knowledge Preservation via Subspace-Constrained LoRA |  | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified |