SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of added parameters, while the rest stay frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's behavior on its initial task.
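Low-rank adaptation (LoRA) is one widely used PEFT method (several low-rank adapter papers appear in the list below). The sketch below is a minimal, illustrative PyTorch implementation, not code from any paper on this page; the layer size, rank, and scaling values are assumptions chosen for the example. It shows the core idea: the pretrained weights are frozen and only a small low-rank update is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update.

    The effective weight becomes W + (alpha / r) * B @ A, where only A and B
    (r * (in_features + out_features) parameters) are updated during fine-tuning.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus trainable low-rank path
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: adapt a single (hypothetical) 4096x4096 projection layer
layer = LoRALinear(nn.Linear(4096, 4096), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```

Running the snippet reports well under 1% of the layer's parameters as trainable, which is the point of the technique: the optimizer state and gradient memory scale with the adapter, not with the full model.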

Papers

Showing 171–180 of 935 papers

| Title | Status | Hype |
|-------|--------|------|
| APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | Code | 1 |
| DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1 |
| Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1 |
| Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1 |
| LoFiT: Localized Fine-tuning on LLM Representations | Code | 1 |
| Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1 |
| DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1 |
| Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1 |
| A Prompt Learning Framework for Source Code Summarization | Code | 1 |
| Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |