SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small subset of parameters, or a small set of added parameters, while keeping the rest of the model frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's performance on its initial task.
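
As a concrete illustration, the sketch below applies LoRA, one of the most widely used PEFT methods, to a pre-trained causal language model. It assumes the Hugging Face transformers and peft libraries are installed; the model name and hyperparameters are placeholder choices rather than a verified recipe from any paper listed on this page.

```python
# Minimal LoRA sketch using the Hugging Face `peft` library (assumed available).
# Model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA injects small low-rank adapter matrices into selected weight matrices;
# only these adapters are trained, while the base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # rank of the low-rank update
    lora_alpha=16,            # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, typical for LLaMA-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of parameters being trained

# From here, `model` can be passed to a standard training loop or Trainer;
# only the adapter weights receive gradient updates.
```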

Papers

Showing 376–400 of 935 papers

Title | Status | Hype
CROSSAN: Towards Efficient and Effective Adaptation of Multiple Multimodal Foundation Models for Sequential Recommendation | Code | 0
DLP: Dynamic Layerwise Pruning in Large Language Models | Code | 0
Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models | Code | 0
From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning | Code | 0
AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts | Code | 0
RCA: Region Conditioned Adaptation for Visual Abductive Reasoning | Code | 0
Conversational Factor Information Retrieval Model (ConFIRM) | Code | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation | Code | 0
AROMA: Autonomous Rank-one Matrix Adaptation | Code | 0
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | Code | 0
Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0
Interweaving Memories of a Siamese Large Language Model | Code | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification | Code | 0
Low-Rank Interconnected Adaptation across Layers | Code | 0
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
Page 16 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified