
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, typically by training small added modules (such as adapters or low-rank update matrices) while the original weights stay frozen. This approach is particularly useful when compute or memory is limited, and because the pre-trained weights are untouched, it also helps preserve the model's performance on its original tasks.
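
As a concrete illustration, here is a minimal sketch of LoRA (Low-Rank Adaptation), one widely used PEFT method that several of the papers listed below build on. The class, layer sizes, and hyperparameters are illustrative assumptions, not taken from any specific paper on this page.

```python
# Minimal LoRA sketch in PyTorch (illustrative; names and sizes are assumptions).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is small-random, B is zero, so training starts from the base model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank trainable correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a layer of a pre-trained model; only the adapters are trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"{trainable} trainable of {total} total parameters")
```

With r=8 on a 768x768 layer, only about 12K of roughly 600K parameters are trained, which is the core trade-off PEFT methods exploit.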

Papers

Showing 1-10 of 935 papers

Title | Status | Hype
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy | | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models | Code | 0
Optimising Language Models for Downstream Tasks: A Post-Training Perspective | | 0
Progtuning: Progressive Fine-tuning Framework for Transformer-based Language Models | | 0
WordCon: Word-level Typography Control in Scene Text Rendering | | 0
Exploring Adapter Design Tradeoffs for Low Resource Music Generation | | 0
ARD-LoRA: Dynamic Rank Allocation for Parameter-Efficient Fine-Tuning of Foundation Models with Heterogeneous Adaptation Needs | | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified