SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of newly added ones, while the bulk of the pre-trained weights stay frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's capabilities on its initial task.
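
One of the most common PEFT methods is LoRA (low-rank adaptation), which many of the papers listed below build on. As a minimal sketch of what PEFT looks like in practice, here is LoRA applied via the Hugging Face peft library; the base checkpoint and hyperparameters (rank, scaling, target modules) are illustrative assumptions, not values taken from this page.

```python
# Minimal LoRA sketch with the Hugging Face `peft` library.
# Assumptions: illustrative base checkpoint and hyperparameters;
# adjust r, lora_alpha, and target_modules for your model and task.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,          # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

# Wrap the frozen base model; only the LoRA matrices are trainable.
model = get_peft_model(base, config)
model.print_trainable_parameters()  # e.g. well under 0.1% of all parameters
```

Training then proceeds as usual (e.g. with transformers.Trainer), and saving the wrapped model writes out only the small adapter weights rather than a full copy of the base model, which is what makes the approach parameter-efficient.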

Papers

Showing 801–825 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0 |
| Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0 |
| MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0 |
| Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0 |
| Low-Rank Interconnected Adaptation across Layers | Code | 0 |
| Parameter-Efficient Fine-Tuning of Vision Foundation Model for Forest Floor Segmentation from UAV Imagery | Code | 0 |
| Efficient Stitchable Task Adaptation | Code | 0 |
| EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0 |
| ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 0 |
| THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0 |
| The effect of fine-tuning on language model toxicity | Code | 0 |
| Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0 |
| LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0 |
| Parameter-Efficient Fine-Tuning without Introducing New Latency | Code | 0 |
| LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0 |
| LoRA-PT: Low-Rank Adapting UNETR for Hippocampus Segmentation Using Principal Tensor Singular Values and Vectors | Code | 0 |
| Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Code | 0 |
| Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning | Code | 0 |
| CiteCheck: Towards Accurate Citation Faithfulness Detection | Code | 0 |
| LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via Gradient-Guided Perturbation Optimization | Code | 0 |
| LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning | Code | 0 |
| Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0 |
| Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks | Code | 0 |
| When does Parameter-Efficient Transfer Learning Work for Machine Translation? | Code | 0 |
| PatchProt: Hydrophobic patch prediction using protein foundation models | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |