SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the model's parameters (or a small set of newly added ones) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
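
Several of the papers listed below (e.g., BoRA, LoRA Diffusion, LoRA-Mini, LoRA-FAIR) build on LoRA (Low-Rank Adaptation), one of the most widely used PEFT methods. The following is a minimal sketch of the core idea, assuming PyTorch; the class name LoRALinear and the default hyperparameters (rank, alpha) are illustrative choices, not taken from any paper on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA-style adapter (names and defaults are examples).

    Wraps a frozen linear layer with a trainable low-rank update:
    the forward pass computes the frozen path W x plus a scaled
    low-rank correction (B A) x, and only A and B receive gradients.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; PEFT trains only the adapter.
        for p in self.base.parameters():
            p.requires_grad = False
        # A is small random, B is zero, so the adapter starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base path plus the trainable low-rank adapter path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

At inference time the product B A can be merged into the frozen weight, so the adapted model runs with no extra latency; this is a general property of LoRA-style adapters rather than a claim about any specific paper listed here.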

Papers

Showing 501–525 of 935 papers (page 21 of 38)

Title | Status | Hype
PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition | - | 0
CrackESS: A Self-Prompting Crack Segmentation System for Edge Devices | - | 0
BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation | - | 0
Sequential Compression Layers for Efficient Federated Learning in Foundational Models | - | 0
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters | - | 0
QueEn: A Large Language Model for Quechua-English Translation | - | 0
PETapter: Leveraging PET-style classification heads for modular few-shot parameter-efficient fine-tuning | - | 0
Streaming Detection of Queried Event Start | Code | 0
Mixture of Physical Priors Adapter for Parameter-Efficient Fine-Tuning | - | 0
CPP-UT-Bench: Can LLMs Write Complex Unit Tests in C++? | - | 0
A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis | - | 0
LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization | - | 0
Enhancing Parameter-Efficient Fine-Tuning of Vision Transformers through Frequency-Based Adaptation | Code | 0
PEFT-as-an-Attack! Jailbreaking Language Models during Federated Parameter-Efficient Fine-Tuning | - | 0
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning | - | 0
Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | - | 0
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models | - | 0
Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning | - | 0
Towards Efficient Model-Heterogeneity Federated Learning for Large Models | - | 0
Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning | - | 0
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models | - | 0
LoRA-Mini: Adaptation Matrices Decomposition and Selective Training | - | 0
LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement | - | 0
Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation | - | 0
Visual Cue Enhancement and Dual Low-Rank Adaptation for Efficient Visual Instruction Fine-Tuning | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified