SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small subset of parameters, or small added modules such as adapters or low-rank matrices, while the original weights stay frozen. This is particularly useful when computational resources are limited, or when the pre-trained model's behavior on its original task must be preserved.
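For a concrete sense of how PEFT methods work, here is a minimal LoRA-style adapter sketch in PyTorch. It is illustrative only: the class name `LoRALinear` and the hyperparameters `r` and `alpha` are our own choices for this example, not drawn from any paper listed below.

```python
# Minimal LoRA-style adapter sketch (illustrative; names and defaults are assumptions).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B are trained. For a 4096x4096 layer with r=8, that is
# 2 * 8 * 4096 = 65,536 trainable parameters instead of ~16.8M.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")
```

Because the frozen base output and the low-rank update are simply summed, the adapter can also be merged back into the base weights after training, so inference costs nothing extra.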

Papers

Showing 401–425 of 935 papers

Title | Status | Hype
Conversational Factor Information Retrieval Model (ConFIRM) | Code | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation | Code | 0
AROMA: Autonomous Rank-one Matrix Adaptation | Code | 0
KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing | Code | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | Code | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification | Code | 0
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0
Music for All: Representational Bias and Cross-Cultural Adaptability of Music Generation Models | Code | 0
Low-Rank Interconnected Adaptation across Layers | Code | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0
LoRA-PT: Low-Rank Adapting UNETR for Hippocampus Segmentation Using Principal Tensor Singular Values and Vectors | Code | 0
FarExStance: Explainable Stance Detection for Farsi | Code | 0
CoLA: Collaborative Low-Rank Adaptation | Code | 0
ColA: Collaborative Adaptation with Gradient Learning | Code | 0
Learn to Preserve and Diversify: Parameter-Efficient Group with Orthogonal Regularization for Domain Generalization | Code | 0
Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models | Code | 0
Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0
LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via Gradient-Guided Perturbation Optimization | Code | 0
Page 17 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified