SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while the original weights remain frozen. This approach is particularly useful when computational resources are limited, or when it is important to preserve the original model's performance on its initial task. A minimal sketch of one such method appears below.
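As a concrete illustration, the following sketch implements one popular PEFT method, LoRA-style low-rank adaptation, in PyTorch. It is a minimal example under assumed settings: the `LoRALinear` wrapper and the hyperparameters `r` and `alpha` are illustrative choices for this page, not taken from any specific paper listed below.

```python
# Minimal LoRA-style PEFT sketch (illustrative; LoRALinear, r, and alpha
# are assumptions for this example, not from any particular paper).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, so only r * (in + out)
    parameters are trained instead of the full in * out matrix W.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze pre-trained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)      # freeze bias as well
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection from a pre-trained model, then optimize only
# the parameters that still require gradients (A and B).
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Freezing the base weights and training only the low-rank factors is what keeps both the optimizer state and the task-specific checkpoint small; many of the LoRA variants listed below differ mainly in how they allocate, decompose, or merge such adapters.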

Papers

Showing 351–375 of 935 papers

Title | Status | Hype
Prompt Compression for Large Language Models: A Survey | Code | 1
Communication-Efficient and Tensorized Federated Fine-Tuning of Large Language Models | - | 0
LoRA Soups: Merging LoRAs for Practical Skill Composition Tasks | Code | 1
LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models | - | 0
Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models | Code | 0
Sequential LLM Framework for Fashion Recommendation | - | 0
RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates | Code | 0
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1
BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation | - | 0
Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks | Code | 1
MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning | Code | 1
DARE the Extreme: Revisiting Delta-Parameter Pruning For Fine-Tuned Models | Code | 0
SLiM: One-shot Quantization and Sparsity with Low-rank Approximation for LLM Weight Compression | Code | 1
QEFT: Quantization for Efficient Fine-Tuning of LLMs | Code | 0
Parameter-Efficient Fine-Tuning of State Space Models | Code | 1
Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning | - | 0
ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning | Code | 0
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation | - | 0
Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning | Code | 3
SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture | - | 0
MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | Code | 0
Parameter-Efficient Fine-Tuning via Selective Discrete Cosine Transform | - | 0
SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers | Code | 0
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | - | 0
Page 15 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified