
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's capabilities on its initial task should be preserved.
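To make the idea concrete, here is a minimal sketch of LoRA (Low-Rank Adaptation), the PEFT method behind many of the papers listed below. It assumes PyTorch; the class name, rank, and scaling values are illustrative choices, not taken from any specific paper's code.

# A minimal sketch of LoRA, assuming PyTorch. Hyperparameters and
# names are illustrative, not from any particular implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only A and B are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: effective weight is W + (alpha / r) * B @ A.
        # B starts at zero, so training begins from the original model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: adapt a single 4096x4096 projection.
layer = LoRALinear(nn.Linear(4096, 4096), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

With r=8 on a 4096x4096 projection, only the two low-rank factors A and B are trained, roughly 0.4% of the layer's parameters, which is the core of the "parameter-efficient" claim.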

Papers

Showing 51–75 of 935 papers

Title | Status | Hype
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | Code | 2
LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | Code | 2
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Code | 2
mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs | Code | 2
LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Code | 2
LoRA: Low-Rank Adaptation of Large Language Models | Code | 2
FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models | Code | 2
Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | Code | 2
InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning | Code | 2
ClassWise-SAM-Adapter: Parameter Efficient Fine-tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation | Code | 2
Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2
Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding | Code | 2
Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2
Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2
Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models | Code | 2
GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2
Low-Rank Quantization-Aware Training for LLMs | Code | 2
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | Code | 2
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
FedJudge: Federated Legal Large Language Model | Code | 1
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models | Code | 1
Extending Whisper with prompt tuning to target-speaker ASR | Code | 1
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Page 3 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified