SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters, or a small set of newly added ones, and keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task must be preserved.
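
Most of the methods catalogued below follow this recipe: freeze the pre-trained weights and train only a small number of added parameters. As a minimal sketch of the idea, using LoRA via Hugging Face's peft library (the checkpoint name and hyperparameter values are illustrative assumptions, not taken from any paper listed here):

```python
# Minimal LoRA sketch with Hugging Face peft; the checkpoint and
# hyperparameter values are illustrative assumptions only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the pre-trained base model; its weights will stay frozen.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Inject rank-8 adapters into the attention projections; only these
# small matrices receive gradients during fine-tuning.
config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # modules to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

Because the base weights never change, the adapters can be dropped or swapped out to recover the original model's behavior on its initial task.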

Papers

Showing 526–550 of 935 papers

Title | Status | Hype
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts | Code | 1
Parameter-Efficient Active Learning for Foundational models | - | 0
PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | - | 0
Updating CLIP to Prefer Descriptions Over Captions | Code | 0
A Survey of Recent Backdoor Attacks and Defenses in Large Language Models | - | 0
A Parameter-efficient Language Extension Framework for Multilingual ASR | - | 0
Low-Rank Quantization-Aware Training for LLMs | Code | 2
An Improved Empirical Fisher Approximation for Natural Gradient Descent | - | 0
Efficient Differentially Private Fine-Tuning of Diffusion Models | - | 0
MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | Code | 2
Time Sensitive Knowledge Editing through Efficient Finetuning | - | 0
Hypernetworks for Personalizing ASR to Atypical Speech | - | 0
Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning | Code | 1
VHDL-Eval: A Framework for Evaluating Large Language Models in VHDL Code Generation | - | 0
Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need | - | 0
Adapter-X: A Novel General Parameter-Efficient Fine-Tuning Framework for Vision | - | 0
Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images | Code | 0
SwitchLoRA: Switched Low-Rank Adaptation Can Learn Full-Rank Information | - | 0
LoFiT: Localized Fine-tuning on LLM Representations | Code | 1
Differentially Private Fine-Tuning of Diffusion Models | - | 0
Mamba State-Space Models Are Lyapunov-Stable Learners | - | 0
Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models | Code | 0
SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation | - | 0
Page 22 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified