SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks while updating only a small fraction of the model's parameters, typically by freezing the pre-trained weights and training a small set of added or selected parameters (for example, low-rank adapters). This is particularly useful when computational resources are limited or when the original model's behavior on its initial task should be preserved.
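As a concrete illustration of the idea, the sketch below implements a LoRA-style adapter (one common PEFT method) in plain PyTorch: the pre-trained linear layer is frozen and only a small low-rank update is trained. The class name, rank, and scaling values are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal sketch of a LoRA-style PEFT adapter: freeze a pre-trained linear
# layer and learn a low-rank update B @ A on top of it. All names and
# hyperparameters are illustrative, not from any specific listed paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus the trainable low-rank update.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: wrap one layer and check how few parameters are actually trained.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

For a 768x768 layer this trains roughly 2% of the parameters, which is the order of parameter savings PEFT methods typically target.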

Papers

Showing 276–300 of 935 papers

Title | Status | Hype
A LoRA is Worth a Thousand Pictures | | 0
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers | | 0
Adaptive Principal Components Allocation with the ℓ2,g-regularized Gaussian Graphical Model for Efficient Fine-Tuning Large Models | Code | 0
CrackESS: A Self-Prompting Crack Segmentation System for Edge Devices | | 0
PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition | | 0
BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation | | 0
Sequential Compression Layers for Efficient Federated Learning in Foundational Models | | 0
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters | | 0
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1
PETapter: Leveraging PET-style classification heads for modular few-shot parameter-efficient fine-tuning | | 0
QueEn: A Large Language Model for Quechua-English Translation | | 0
SoRA: Singular Value Decomposed Low-Rank Adaptation for Domain Generalizable Representation Learning | Code | 2
Streaming Detection of Queried Event Start | Code | 0
CPP-UT-Bench: Can LLMs Write Complex Unit Tests in C++? | | 0
Mixture of Physical Priors Adapter for Parameter-Efficient Fine-Tuning | | 0
LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization | | 0
A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis | | 0
Unified Parameter-Efficient Unlearning for LLMs | Code | 1
FonTS: Text Rendering with Typography and Style Controls | Code | 1
Enhancing Parameter-Efficient Fine-Tuning of Vision Transformers through Frequency-Based Adaptation | Code | 0
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning | | 0
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | | 0
Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | | 0
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models | | 0
Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading | Code | 2
Page 12 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified