SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small subset of parameters, or a few small added modules such as adapters or low-rank updates, while keeping the original weights frozen. This approach is particularly useful when computational resources are limited, and because the pre-trained weights are left untouched, it also helps preserve the model's performance on its original task.

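To make the idea concrete, below is a minimal sketch of one widely used PEFT method, low-rank adaptation (LoRA), in PyTorch. The class name `LoRALinear` and the default `rank` and `alpha` values are illustrative assumptions, not taken from any paper listed here; the point is simply that the frozen base weight is augmented with a trainable low-rank correction, so only a small fraction of parameters receives gradients.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    A minimal LoRA-style sketch; names and hyperparameters are illustrative.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Trainable low-rank factors: delta_W = B @ A, scaled by alpha / rank.
        # B starts at zero so the wrapped layer initially matches the base layer.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction; only A and B get gradients.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total}")  # only a small fraction trains
```

In practice only the low-rank factors (here `A` and `B`) need to be stored and swapped per task, which is what makes the approach parameter-efficient.
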
Papers

Showing 401–450 of 935 papers

Title | Status | Hype
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | Code | 0
SBoRA: Low-Rank Adaptation with Regional Weight Updates | Code | 0
MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | Code | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing | Code | 0
MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning | Code | 0
Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation | Code | 0
AROMA: Autonomous Rank-one Matrix Adaptation | Code | 0
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification | Code | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training | Code | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
Orchid2024: A cultivar-level dataset and methodology for fine-grained classification of Chinese Cymbidium Orchids | Code | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0
FarExStance: Explainable Stance Detection for Farsi | Code | 0
CoLA: Collaborative Low-Rank Adaptation | Code | 0
Learn to Preserve and Diversify: Parameter-Efficient Group with Orthogonal Regularization for Domain Generalization | Code | 0
ColA: Collaborative Adaptation with Gradient Learning | Code | 0
Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models | Code | 0
Low-Rank Interconnected Adaptation across Layers | Code | 0
Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0
CLIP-SLA: Parameter-Efficient CLIP Adaptation for Continuous Sign Language Recognition | Code | 0
Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers | Code | 0
Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets | Code | 0
Speech Translation Refinement using Large Language Models | Code | 0
ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning | Code | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
CLIP-IT: CLIP-based Pairing for Histology Images Classification | Code | 0
CLaDMoP: Learning Transferrable Models from Successful Clinical Trials via LLMs | Code | 0
LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via Gradient-Guided Perturbation Optimization | Code | 0
Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning | Code | 0
LoRA-PT: Low-Rank Adapting UNETR for Hippocampus Segmentation Using Principal Tensor Singular Values and Vectors | Code | 0
Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0
Exact and Efficient Unlearning for Large Language Model-based Recommendation | – | 0
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | – | 0
Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need | – | 0
Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | – | 0
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | – | 0
Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | – | 0
Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter | – | 0
Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | – | 0
Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning | – | 0
An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning | – | 0
Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified