SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task must be preserved. A minimal sketch of the idea follows.
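The sketch below illustrates one widely used PEFT method, LoRA (low-rank adaptation), which recurs throughout the paper list on this page: the pre-trained weights are frozen and only a small low-rank update is trained. This is an illustrative PyTorch implementation, not code from any specific paper or library here; the LoRALinear class name, the rank and alpha values, and the 768-dimensional layer are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    h = W x + (alpha / r) * B A x, where only A and B are trained.
    (Hypothetical helper for illustration, not a library class.)"""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B is zero-initialized so the update starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a stand-in for a pre-trained projection and count what trains.
layer = nn.Linear(768, 768)
peft_layer = LoRALinear(layer, r=8)
trainable = sum(p.numel() for p in peft_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in peft_layer.parameters())
print(f"trainable: {trainable} / {total}")  # only the two rank-8 factors train
```

With rank 8 on a 768x768 layer, the trainable update is two 8x768 factors (about 12k parameters) against roughly 590k frozen weights, which is the source of the efficiency claim: the base model is untouched and can be restored exactly by dropping the adapter.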

Papers

Showing 376–400 of 935 papers (page 16 of 38)

IAPT: Instruction-Aware Prompt Tuning for Large Language Models
ICL Markup: Structuring In-Context Learning using Soft-Token Tags
iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation
HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
HSACNet: Hierarchical Scale-Aware Consistency Regularized Semi-Supervised Change Detection
Differentially Private Fine-Tuning of Diffusion Models
AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information
House of Cards: Massive Weights in LLMs
HM3: Heterogeneous Multi-Class Model Merging
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech
HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation
Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning
HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models
HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation
AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping
HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning
HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization
HD-PiSSA: High-Rank Distributed Orthogonal Adaptation

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified