
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or by adding small trainable modules, while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task should be preserved.
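As a concrete illustration, below is a minimal sketch of one widely used PEFT method, a LoRA-style low-rank adapter, assuming PyTorch. The class name, rank, and scaling values here are illustrative choices, not drawn from any specific paper listed below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A maps input -> rank, B maps rank -> output.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction; only lora_a/lora_b get gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

# Usage: wrap a layer of a pre-trained model, then train only the adapter.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} parameters")
```

Because lora_b is initialized to zero, the wrapped layer initially behaves exactly like the pre-trained one; fine-tuning then updates only the adapter's roughly 12K parameters instead of the layer's full ~590K.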

Papers

Showing 176–200 of 935 papers

Title | Status | Hype
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
Extending Whisper with prompt tuning to target-speaker ASR | Code | 1
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning | Code | 1
DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning | Code | 1
A Comprehensive Analysis of Adapter Efficiency | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1
Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | Code | 1
Parameter Efficient Fine-tuning via Explained Variance Adaptation | Code | 1
Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter | Code | 1
Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | Code | 1
Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1
Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region | Code | 1
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | Code | 1
Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1
Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Code | 1
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
LoRA Soups: Merging LoRAs for Practical Skill Composition Tasks | Code | 1
Page 8 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified