
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small set of added parameters, while the rest remain frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's performance on its initial task.
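
As a concrete illustration, here is a minimal sketch of the most widely used PEFT method, low-rank adaptation (LoRA), in plain PyTorch. The class name LoRALinear, the rank and alpha values, and the layer sizes are illustrative choices, not details taken from any paper listed below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # A gets a small random init; B starts at zero, so the update is
        # initially a no-op and training begins from the pre-trained behavior.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Wrap one projection-sized layer and count what actually trains.
base = nn.Linear(4096, 4096)  # stands in for, e.g., one attention projection
lora = LoRALinear(base, r=8, alpha=16)
trainable = sum(p.numel() for p in lora.parameters() if p.requires_grad)
total = sum(p.numel() for p in lora.parameters())
print(f"trainable: {trainable:,} of {total:,} ({100 * trainable / total:.2f}%)")
# trainable: 65,536 of 16,846,848 (0.39%)
```

Only A and B receive gradients, so optimizer state and checkpoints shrink proportionally; after training, the product B A can be merged back into the base weight matrix, adding no inference cost.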

Papers

Showing 326–350 of 935 papers

Title | Status | Hype
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters |  | 0
BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing |  | 0
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning |  | 0
ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models |  | 0
Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization |  | 0
LayerNorm: A key component in parameter-efficient fine-tuning |  | 0
Bilevel ZOFO: Bridging Parameter-Efficient and Zeroth-Order Techniques for Efficient LLM Fine-Tuning and Meta-Training |  | 0
Aligner: One Global Token is Worth Millions of Parameters When Aligning Large Language Models |  | 0
BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation |  | 0
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning |  | 0
A Hessian-informed hyperparameter optimization for differential learning rate |  | 0
Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models |  | 0
KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning |  | 0
Dual Decomposition of Weights and Singular Value Low Rank Adaptation |  | 0
Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation |  | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models |  | 0
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation |  | 0
Beyond LoRA: Exploring Efficient Fine-Tuning Techniques for Time Series Foundational Models |  | 0
Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers |  | 0
Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs |  | 0
BeamLoRA: Beam-Constraint Low-Rank Adaptation |  | 0
Ahead-of-Time P-Tuning |  | 0
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM |  | 0
Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models |  | 0
LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified