SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the parameters, typically small added modules such as adapters or low-rank matrices, while the rest of the model stays frozen. The approach is particularly useful when compute or memory is limited, when many task-specific variants of one base model must be stored, or when it is desirable to preserve the original model's capabilities on its initial task.
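
As a quick illustration of the idea (a minimal sketch, not drawn from any paper listed on this page), the snippet below shows a LoRA-style adapter in PyTorch: the pre-trained weight is frozen and only two small low-rank factors are trained. The class name LoRALinear and the default rank and scaling values are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update (hypothetical sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # Low-rank factors: the effective weight becomes W + (alpha / r) * B @ A
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")  # only the LoRA factors are updated
```

For a 768x768 linear layer this trains roughly 12K parameters out of about 600K, which is where the memory and storage savings come from.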

Papers

Showing 251–300 of 935 papers

Title | Status | Hype
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning | | 0
Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning | | 0
Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation | | 0
Sensitivity-Aware Efficient Fine-Tuning via Compact Dynamic-Rank Adaptation | | 0
SoMA: Singular Value Decomposed Minor Components Adaptation for Domain Generalizable Representation Learning | | 0
pFedMxF: Personalized Federated Class-Incremental Learning with Mixture of Frequency Aggregation | | 0
TADFormer: Task-Adaptive Dynamic TransFormer for Efficient Multi-Task Learning | | 0
Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding | Code | 1
VELoRA: A Low-Rank Adaptation Approach for Efficient RGB-Event based Recognition | Code | 0
Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices | | 0
KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing | Code | 0
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training | Code | 0
Interweaving Memories of a Siamese Large Language Model | Code | 0
LLMsAgainstHate @ NLU of Devanagari Script Languages 2025: Hate Speech Detection and Target Identification in Devanagari Languages via Parameter Efficient Fine-Tuning of LLMs | Code | 0
Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning | Code | 0
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training | Code | 0
CustomTTT: Motion and Appearance Customized Video Generation via Test-Time Training | Code | 0
FedPIA -- Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning | | 0
GraphLoRA: Empowering LLMs Fine-Tuning via Graph Collaboration of MoE | | 0
Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models | Code | 0
Parameter-efficient Fine-tuning for improved Convolutional Baseline for Brain Tumor Segmentation in Sub-Saharan Africa Adult Glioma Dataset | Code | 0
FarExStance: Explainable Stance Detection for Farsi | Code | 0
Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0
Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT | Code | 0
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
A LoRA is Worth a Thousand Pictures | | 0
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers | | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
CrackESS: A Self-Prompting Crack Segmentation System for Edge Devices | | 0
PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition | | 0
BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation | | 0
Sequential Compression Layers for Efficient Federated Learning in Foundational Models | | 0
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters | | 0
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1
PETapter: Leveraging PET-style classification heads for modular few-shot parameter-efficient fine-tuning | | 0
QueEn: A Large Language Model for Quechua-English Translation | | 0
SoRA: Singular Value Decomposed Low-Rank Adaptation for Domain Generalizable Representation Learning | Code | 2
Streaming Detection of Queried Event Start | Code | 0
CPP-UT-Bench: Can LLMs Write Complex Unit Tests in C++? | | 0
Mixture of Physical Priors Adapter for Parameter-Efficient Fine-Tuning | | 0
LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization | | 0
A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis | | 0
Unified Parameter-Efficient Unlearning for LLMs | Code | 1
FonTS: Text Rendering with Typography and Style Controls | Code | 1
Enhancing Parameter-Efficient Fine-Tuning of Vision Transformers through Frequency-Based Adaptation | Code | 0
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning | | 0
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | | 0
Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | | 0
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models | | 0
Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified