SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added modules) while keeping the rest frozen. This is particularly useful when computational resources are limited, or when the original model's capabilities must be preserved while it is adapted to new tasks.
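
Many of the papers listed below build on low-rank adapters in the LoRA family, which illustrate the core PEFT idea well. The following is a minimal, illustrative PyTorch sketch of a LoRA-style adapter, not code from any of the listed papers; the class name and the hyperparameters r (rank) and alpha (scaling) are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    The pre-trained weight W stays frozen; only the rank-r factors A and B
    are trained, so the trainable-parameter count drops from
    out_features * in_features to r * (in_features + out_features).
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        self.scaling = alpha / r
        # A is initialized randomly, B to zero, so the adapter starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: wrap a "pre-trained" projection and count trainable parameters.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,}")
```

With in_features = out_features = 4096 (a typical LLaMA2-7b projection size), the adapter trains roughly 65K of the layer's ~16.8M parameters, i.e. under 0.4%.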

Papers

Showing 76–100 of 935 papers

Title | Status | Hype
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1
Reasoning on a Budget: Miniaturizing DeepSeek R1 with SFT-GRPO Alignment for Instruction-Tuned LLMs | Code | 1
Multi-Token Prediction Needs Registers | Code | 1
Vision Graph Prompting via Semantic Low-Rank Decomposition | Code | 1
GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1
SpectrumFM: A Foundation Model for Intelligent Spectrum Management | Code | 1
DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1
PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning | Code | 1
Efficient Self-Supervised Adaptation for Medical Image Analysis | Code | 1
MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning | Code | 1
SPMTrack: Spatio-Temporal Parameter-Efficient Fine-Tuning with Mixture of Experts for Scalable Visual Tracking | Code | 1
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning | Code | 1
SALT: Singular Value Adaptation with Low-Rank Transformation | Code | 1
Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks | Code | 1
Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages | Code | 1
Revisiting semi-supervised learning in the era of foundation models | Code | 1
State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models | Code | 1
R-LoRA: Random Initialization of Multi-Head LoRA for Multi-Task Learning | Code | 1
CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning | Code | 1
SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model | Code | 1
Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Code | 1
Parameter Efficient Fine-Tuning of Segment Anything Model | Code | 1
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding | Code | 1
KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified