
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, leaving the rest frozen. This approach is particularly useful when computational resources are limited or when the pre-trained model's performance on its original task should be preserved.
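Many of the papers below build on low-rank adaptation (LoRA)-style methods, where a frozen weight matrix is augmented with a small trainable low-rank update. The PyTorch snippet below is a minimal sketch of that idea; the `LoRALinear` class, the `rank`/`alpha` hyperparameters, and the layer sizes are illustrative choices, not taken from any specific paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style).

    Illustrative sketch: the effective weight becomes W + (alpha/rank) * A @ B,
    with only A and B trained.
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A maps in_features -> rank, B maps rank -> out_features.
        # B starts at zero so the wrapped layer initially matches the base layer.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank trainable path.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

# Usage: wrap one layer and count how few parameters are actually trainable.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))  # shape (2, 768), same as the base layer
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.1f}%)")
```

With these example sizes, only about 2% of the layer's parameters receive gradients, which is the core trade-off PEFT methods exploit: near-full-fine-tuning quality at a small fraction of the optimizer and storage cost.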

Papers

Showing 226–250 of 935 papers

| Title | Status | Hype |
|---|---|---|
| Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning | Code | 1 |
| CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1 |
| Generative Parameter-Efficient Fine-Tuning | Code | 1 |
| Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1 |
| GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1 |
| Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1 |
| Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1 |
| GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1 |
| FonTS: Text Rendering with Typography and Style Controls | Code | 1 |
| C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning | Code | 1 |
| SORSA: Singular Values and Orthonormal Regularized Singular Vectors Adaptation of Large Language Models | Code | 1 |
| Sparse is Enough in Fine-tuning Pre-trained Large Language Models | Code | 1 |
| FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1 |
| Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1 |
| Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models | Code | 1 |
| MoSA: Mixture of Sparse Adapters for Visual Efficient Tuning | Code | 1 |
| SPMTrack: Spatio-Temporal Parameter-Efficient Fine-Tuning with Mixture of Experts for Scalable Visual Tracking | Code | 1 |
| State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models | Code | 1 |
| SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Code | 1 |
| ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1 |
| KIF: Knowledge Identification and Fusion for Language Model Continual Learning | Code | 1 |
| An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1 |
| Towards a General Framework for Continual Learning with Pre-training | Code | 1 |
| Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks | Code | 1 |
| Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1 |
Page 10 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |