SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters while keeping the rest frozen. The approach is particularly useful when computational resources are limited, or when preserving the original model's performance on its initial task is desirable.
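As a concrete illustration, below is a minimal PyTorch sketch of LoRA (low-rank adaptation), one of the most common PEFT methods and the basis of many of the papers listed below: the pre-trained weight matrix is frozen, and only a small low-rank update B·A is trained. The layer size and hyperparameters (r, alpha) are illustrative defaults, not values taken from any paper on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Only A and B receive gradients."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights (and bias, if present).
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is small random, B is zero, so the update is a no-op at step 0.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Illustrative usage on a single 768x768 projection layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

For a 768×768 projection with r=8, this trains roughly 12K of about 600K parameters, around 2%; applied across a full transformer, the same ratio is what makes PEFT cheap to train and store.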

Papers

Showing 801–850 of 935 papers

Title | Status | Hype
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation | Code | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
Low-Rank Interconnected Adaptation across Layers | Code | 0
Parameter-Efficient Fine-Tuning of Vision Foundation Model for Forest Floor Segmentation from UAV Imagery | Code | 0
Efficient Stitchable Task Adaptation | Code | 0
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0
ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 0
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0
The effect of fine-tuning on language model toxicity | Code | 0
Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
Parameter-Efficient Fine-Tuning without Introducing New Latency | Code | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
LoRA-PT: Low-Rank Adapting UNETR for Hippocampus Segmentation Using Principal Tensor Singular Values and Vectors | Code | 0
Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Code | 0
Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning | Code | 0
CiteCheck: Towards Accurate Citation Faithfulness Detection | Code | 0
LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via Gradient-Guided Perturbation Optimization | Code | 0
LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning | Code | 0
Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0
Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks | Code | 0
When does Parameter-Efficient Transfer Learning Work for Machine Translation? | Code | 0
PatchProt: Hydrophobic patch prediction using protein foundation models | Code | 0
ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation | Code | 0
Pear: Pruning and Sharing Adapters in Visual Parameter-Efficient Fine-Tuning | Code | 0
PE-CLIP: A Parameter-Efficient Fine-Tuning of Vision Language Models for Dynamic Facial Expression Recognition | Code | 0
AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0
Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection | Code | 0
Token Adaptation via Side Graph Convolution for Temporally and Spatially Efficient Fine-tuning of 3D Point Cloud Transformers | Code | 0
Towards Infinite-Long Prefix in Transformer | Code | 0
DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Code | 0
PEFT for Speech: Unveiling Optimal Placement, Merging Strategies, and Ensemble Techniques | Code | 0
DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis | Code | 0
ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning | Code | 0
PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization | Code | 0
PEMA: An Offsite-Tunable Plug-in External Memory Adaptation for Language Models | Code | 0
Capacity Control is an Effective Memorization Mitigation Mechanism in Text-Conditional Diffusion Models | Code | 0
Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need | Code | 0
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | Code | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution | Code | 0
Personalized LLM Response Generation with Parameterized Memory Injection | Code | 0
Leveraging Large Language Models for enzymatic reaction prediction and characterization | Code | 0
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts | Code | 0
Soft Language Prompts for Language Transfer | Code | 0
Adapting Shortcut With Normalizing Flow: An Efficient Tuning Framework for Visual Recognition | Code | 0
Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation | Code | 0
Domain Expansion: Parameter-Efficient Modules as Building Blocks for Composite Domains | Code | 0
Page 17 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified