
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding a small set of new ones, while keeping the rest frozen. This makes fine-tuning practical when computational resources and memory are limited, and it helps preserve the pre-trained model's behavior on its original tasks.
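As a concrete illustration, the sketch below shows one common PEFT approach: a LoRA-style low-rank adapter wrapped around a frozen linear layer in PyTorch. The class name, rank, and scaling constant are illustrative assumptions for this sketch, not values taken from any specific paper listed below.

```python
# Minimal sketch of a LoRA-style parameter-efficient adapter (illustrative only).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a zero update
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")  # only the adapter trains
```

Only the two small adapter matrices receive gradients, so the number of trainable parameters is a few percent (or less) of the full layer.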

Papers

Showing 851–875 of 935 papers

Title | Status | Hype
Learn to Preserve and Diversify: Parameter-Efficient Group with Orthogonal Regularization for Domain Generalization | Code | 0
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning | Code | 0
Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models | Code | 0
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models | Code | 0
DLP: Dynamic Layerwise Pruning in Large Language Models | Code | 0
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training | Code | 0
BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models | Code | 0
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations | Code | 0
SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers | Code | 0
KIND: Knowledge Integration and Diversion for Training Decomposable Models | Code | 0
SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning | Code | 0
Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics | Code | 0
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models | Code | 0
Benchmarking Pathology Foundation Models: Adaptation Strategies and Scenarios | Code | 0
Privacy-Preserved Automated Scoring using Federated Learning for Educational Research | Code | 0
Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators | Code | 0
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting | Code | 0
Sparsity May Be All You Need: Sparse Random Parameter Adaptation | Code | 0
Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models | Code | 0
Spectral-Aware Low-Rank Adaptation for Speaker Verification | Code | 0
Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models | Code | 0
DARE the Extreme: Revisiting Delta-Parameter Pruning For Fine-Tuned Models | Code | 0
KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing | Code | 0
Towards Real Zero-Shot Camouflaged Object Segmentation without Camouflaged Annotations | Code | 0
Speech Translation Refinement using Large Language Models | Code | 0
Page 35 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified