SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small set of added parameters, while keeping the pre-trained weights frozen. This approach is particularly useful when computational resources are limited, or when the original weights should remain untouched so that the model's performance on its initial task is preserved.
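Many of the papers listed below build on low-rank adaptation (LoRA), one of the most common PEFT techniques. As an illustrative sketch only, not the implementation from any listed paper, the following PyTorch snippet shows the core idea: freeze a pre-trained linear layer and learn a small low-rank residual on top of it. The class name LoRALinear and the defaults r=8 and alpha=16 are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the wrapped layer initially matches the base layer.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a single 768x768 projection layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # only a small fraction is trainable
```

Because the frozen path is untouched, the low-rank update can be merged into or removed from the base weights after training, which is why LoRA-style methods preserve the original model.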

Papers

Showing 351–400 of 935 papers

Title | Status | Hype
Improving LoRA in Privacy-preserving Federated Learning | — | 0
BeamLoRA: Beam-Constraint Low-Rank Adaptation | — | 0
Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning | — | 0
Ahead-of-Time P-Tuning | — | 0
Improving Domain Adaptation through Extended-Text Reading Comprehension | — | 0
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | — | 0
Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy? | — | 0
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | — | 0
LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement | — | 0
Balancing Stability and Plasticity in Pretrained Detector: A Dual-Path Framework for Incremental Object Detection | — | 0
A GEN AI Framework for Medical Note Generation | — | 0
LoRA ensembles for large language model fine-tuning | — | 0
LoRAGuard: An Effective Black-box Watermarking Approach for LoRAs | — | 0
DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model | — | 0
HyperLoader: Integrating Hypernetwork-Based LoRA and Adapter Layers into Multi-Task Transformers for Sequence Labelling | — | 0
AutoPsyC: Automatic Recognition of Psychodynamic Conflicts from Semi-structured Interviews with Large Language Models | — | 0
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters | — | 0
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models | — | 0
DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential Low-Rank Matrix Adaptation | — | 0
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | — | 0
HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation | — | 0
Hypernetworks for Personalizing ASR to Atypical Speech | — | 0
HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks | — | 0
HyperTuning: Toward Adapting Large Language Models without Back-propagation | — | 0
Mixed Text Recognition with Efficient Parameter Fine-Tuning and Transformer | — | 0
IAPT: Instruction-Aware Prompt Tuning for Large Language Models | — | 0
ICL Markup: Structuring In-Context Learning using Soft-Token Tags | — | 0
iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation | — | 0
HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models | — | 0
HSACNet: Hierarchical Scale-Aware Consistency Regularized Semi-Supervised Change Detection | — | 0
Differentially Private Fine-Tuning of Diffusion Models | — | 0
AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information | — | 0
House of Cards: Massive Weights in LLMs | — | 0
HM3: Heterogeneous Multi-Class Model Merging | — | 0
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | — | 0
HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation | — | 0
Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models | — | 0
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge | — | 0
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation | — | 0
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2 | — | 0
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning | — | 0
HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models | — | 0
HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation | — | 0
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective | — | 0
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation | — | 0
AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping | — | 0
HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation | — | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | — | 0
HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization | — | 0
HD-PiSSA: High-Rank Distributed Orthogonal Adaptation | — | 0
Page 8 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified