SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the parameters, or small added modules, instead of the full model. This approach is particularly useful when compute or memory is limited, and it helps preserve the original model's capabilities while adding task-specific behavior.
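
Many of the papers listed below build on low-rank adapters (LoRA), one common PEFT approach. The sketch below is a minimal, illustrative PyTorch example of the idea: a frozen linear layer wrapped with a trainable low-rank update. The class name, rank, and scaling values are assumptions chosen for illustration, not taken from any specific paper on this page.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the adapter factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: A maps in_features -> rank, B maps rank -> out_features.
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # adapter starts as a zero update
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original (frozen) output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))


# Usage: wrap a layer of a pre-trained model and fine-tune only the adapter.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```

Only the two small factor matrices receive gradients, which is what keeps the number of trainable parameters a tiny fraction of the full model.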

Papers

Showing 151–175 of 935 papers

Title | Status | Hype
Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models | Code | 1
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models | Code | 1
VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | Code | 1
Sparse Matrix in Large Language Model Fine-tuning | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference | Code | 1
Spectral Adapter: Fine-Tuning in Spectral Space | Code | 1
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | Code | 1
Parameter-Efficient Instance-Adaptive Neural Video Compression | Code | 1
Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning | Code | 1
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report | Code | 1
Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting | Code | 1
Simple, Efficient and Scalable Structure-aware Adapter Boosts Protein Language Models | Code | 1
Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model | Code | 1
Mixture of Low-rank Experts for Transferable AI-Generated Image Detection | Code | 1
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1
Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach | Code | 1
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1
Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1
PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation | Code | 1
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | Code | 1
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | Code | 1
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
Page 7 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified