
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small set of added parameters, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's behavior on its initial task should be preserved.
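
As a minimal, hypothetical illustration of the idea behind many of the papers listed below, the following PyTorch sketch implements a LoRA-style adapter: the pre-trained weight matrix stays frozen and only a small low-rank residual is trained. The class name LoRALinear and the defaults r=8 and alpha=16.0 are assumptions chosen for this example, not taken from any specific paper on this page.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank residual."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights fixed
        # Low-rank factors: A maps the input down to rank r, B maps back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so training begins exactly at the base model.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B receive gradients: 2 * 8 * 768 = 12,288 parameters here,
# versus 590,592 in the frozen base layer.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")

Most low-rank methods in the list (LoRA variants, DoRA/EDoRA-style decompositions, tensor-decomposition approaches) refine some aspect of this basic recipe: how the low-rank update is parameterized, initialized, shared across layers, or allocated per layer.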

Papers

Showing 451–500 of 935 papers

Title | Status | Hype
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | — | 0
Is your LLM trapped in a Mental Set? Investigative study on how mental sets affect the reasoning capabilities of LLMs | — | 0
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0
OMoE: Diversifying Mixture of Low-Rank Adaptation by Orthogonal Finetuning | — | 0
Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models | — | 0
LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning | — | 0
A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language Models | — | 0
TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning | — | 0
Optimizing Language Models for Grammatical Acceptability: A Comparative Study of Fine-Tuning Techniques | — | 0
A Hessian-informed hyperparameter optimization for differential learning rate | — | 0
Speech Recognition for Automatically Assessing Afrikaans and isiXhosa Preschool Oral Narratives | — | 0
How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters | Code | 0
A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model | — | 0
TADFormer : Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning | — | 0
MedFocusCLIP : Improving few shot classification in medical datasets using pixel wise attention | — | 0
Spectral-Aware Low-Rank Adaptation for Speaker Verification | Code | 0
ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 0
Efficient Deployment of Large Language Models on Resource-constrained Devices | — | 0
tCURLoRA: Tensor CUR Decomposition Based Low-Rank Parameter Adaptation and Its Application in Medical Image Segmentation | — | 0
SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation | — | 0
SoMA: Singular Value Decomposed Minor Components Adaptation for Domain Generalizable Representation Learning | — | 0
Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation | — | 0
pFedMxF: Personalized Federated Class-Incremental Learning with Mixture of Frequency Aggregation | — | 0
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning | — | 0
LoKi: Low-dimensional KAN for Efficient Fine-tuning Image Models | — | 0
F^3OCUS - Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics | — | 0
Rethinking Token Reduction with Parameter-Efficient Fine-Tuning in ViT for Pixel-Level Tasks | Code | 0
TADFormer: Task-Adaptive Dynamic TransFormer for Efficient Multi-Task Learning | — | 0
Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning | — | 0
Sensitivity-Aware Efficient Fine-Tuning via Compact Dynamic-Rank Adaptation | — | 0
VELoRA: A Low-Rank Adaptation Approach for Efficient RGB-Event based Recognition | Code | 0
Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices | — | 0
KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing | Code | 0
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training | Code | 0
Interweaving Memories of a Siamese Large Language Model | Code | 0
Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning | Code | 0
LLMsAgainstHate @ NLU of Devanagari Script Languages 2025: Hate Speech Detection and Target Identification in Devanagari Languages via Parameter Efficient Fine-Tuning of LLMs | Code | 0
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training | Code | 0
CustomTTT: Motion and Appearance Customized Video Generation via Test-Time Training | Code | 0
FedPIA -- Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning | — | 0
GraphLoRA: Empowering LLMs Fine-Tuning via Graph Collaboration of MoE | — | 0
Parameter-efficient Fine-tuning for improved Convolutional Baseline for Brain Tumor Segmentation in Sub-Saharan Africa Adult Glioma Dataset | Code | 0
Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models | Code | 0
FarExStance: Explainable Stance Detection for Farsi | Code | 0
Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT | Code | 0
Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
A LoRA is Worth a Thousand Pictures | — | 0
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers | — | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
Page 10 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified