
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts a pre-trained model to new tasks by updating only a small fraction of its parameters, or by adding small trainable modules, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
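
Many of the entries below build on low-rank adaptation (LoRA) and related PEFT methods. As a rough illustration of the core idea, here is a minimal PyTorch sketch; it is not the implementation of any listed paper, and the `LoRALinear` class name and the rank/scaling hyperparameters are illustrative assumptions. The pretrained weight stays frozen and only two small low-rank factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style adapter (illustrative sketch): the pretrained
    layer is frozen and only the low-rank factors A and B are trained,
    so the effective weight is W + (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapter initially leaves the model unchanged.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Wrap a single layer and count how few parameters remain trainable.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

At rank 8 on a 4096-dimensional layer, roughly 0.4% of the layer's parameters are trainable, which is where the "parameter-efficient" label comes from.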

Papers

Showing 51–75 of 935 papers

Title | Status | Hype
KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning | — | 0
CLaDMoP: Learning Transferrable Models from Successful Clinical Trials via LLMs | Code | 0
HD-PiSSA: High-Rank Distributed Orthogonal Adaptation | — | 0
Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models | — | 0
Explain Less, Understand More: Jargon Detection via Personalized Parameter-Efficient Fine-tuning | — | 0
Representation Discrepancy Bridging Method for Remote Sensing Image-Text Retrieval | — | 0
15,500 Seconds: Lean UAV Classification Leveraging PEFT and Pre-Trained Networks | Code | 0
4,500 Seconds: Small Data Training Approaches for Deep UAV Audio Classification | Code | 0
Few-Shot Adversarial Low-Rank Fine-Tuning of Vision-Language Models | — | 0
AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs | — | 0
CoLA: Collaborative Low-Rank Adaptation | Code | 0
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1
VP Lab: a PEFT-Enabled Visual Prompting Laboratory for Semantic Segmentation | — | 0
Parameter-Efficient Fine-Tuning of Multispectral Foundation Models for Hyperspectral Image Classification | — | 0
Privacy Preserving Conversion Modeling in Data Clean Room | — | 0
Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis | Code | 1
Dual Decomposition of Weights and Singular Value Low Rank Adaptation | — | 0
OSoRA: Output-Dimension and Singular-Value Initialized Low-Rank Adaptation | — | 0
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1
Efficient Federated Class-Incremental Learning of Pre-Trained Models via Task-agnostic Low-rank Residual Adaptation | — | 0
Adaptive parameter-efficient fine-tuning via Hessian-informed subset selection | — | 0
SRLoRA: Subspace Recomposition in Low-Rank Adaptation via Importance-Based Fusion and Reinitialization | — | 0
Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets | Code | 0
Parameter Efficient Continual Learning with Dynamic Low-Rank Adaptation | — | 0
Memory-Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified