SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, typically freezing the original weights and training lightweight added components such as adapters, low-rank (LoRA) matrices, or soft prompts. This makes fine-tuning practical when compute and memory are limited, keeps per-task storage small, and helps preserve the pre-trained model's behaviour on its original tasks.
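
As a rough illustration of the core idea behind one widely used PEFT method, low-rank adaptation (LoRA), the sketch below freezes a pre-trained linear layer and trains only a small pair of low-rank matrices. It is a minimal, self-contained PyTorch example; the layer size, rank, and scaling values are illustrative defaults, not settings taken from any paper listed on this page.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable low-rank update.

    The adapted forward pass computes W x + (B A) x * (alpha / r), where only
    A and B are trained. Hyperparameters here are illustrative defaults.
    """

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # trainable down-projection
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))        # trainable up-projection, init to 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus the small trainable low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: adapt a single projection; only the LoRA parameters receive gradients.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```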

Papers

Showing 871–880 of 935 papers

Title | Status | Hype
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM | | 0
Text-guided High-definition Consistency Texture Model | | 0
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Code | 1
HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation | | 0
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment | Code | 0
RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models | Code | 0
Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs | | 0
AMR Parsing with Instruction Fine-tuned Pre-trained Language Models | | 0
MasakhaNEWS: News Topic Classification for African languages | Code | 1
AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs | Code | 1
Page 88 of 94

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified