SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small set of added parameters, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, when many task-specific variants of a single base model must be stored and served, or when the original model's performance on its initial tasks should be preserved.
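The most common PEFT method among the papers listed below is LoRA (Low-Rank Adaptation), which freezes the pre-trained weights and learns a small low-rank update alongside selected layers. The following is a minimal illustrative sketch in PyTorch, not the reference implementation of any listed paper; the class name LoRALinear and the values r=8 and alpha=16.0 are assumptions chosen for demonstration.

```python
# Minimal LoRA sketch (illustrative, not any paper's reference code).
# Assumes PyTorch is installed; rank and scaling values are arbitrary.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are small matrices."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # update starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} parameters")  # roughly 3%
```

Zero-initializing the B matrix makes the adapted layer behave exactly like the frozen base layer at the start of training, so fine-tuning begins from the pre-trained model's behavior rather than perturbing it.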

Papers

Showing 1–50 of 935 papers

Title | Status | Hype
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | Code | 9
Segment Any Text: A Universal Approach for Robust, Efficient and Adaptable Sentence Segmentation | Code | 7
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | Code | 6
QLoRA: Efficient Finetuning of Quantized LLMs | Code | 6
Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models | Code | 5
Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models | Code | 4
NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment | Code | 4
DoRA: Weight-Decomposed Low-Rank Adaptation | Code | 4
Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey | Code | 4
Efficient Few-Shot Learning Without Prompts | Code | 4
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning | Code | 4
Vision-Speech Models: Teaching Speech Models to Converse about Images | Code | 3
Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning | Code | 3
SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation | Code | 3
A Survey on LoRA of Large Language Models | Code | 3
LoRA-GA: Low-Rank Adaptation with Gradient Approximation | Code | 3
Low-Rank Few-Shot Adaptation of Vision-Language Models | Code | 3
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | Code | 3
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | Code | 3
Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation | Code | 3
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation | Code | 3
S-LoRA: Serving Thousands of Concurrent LoRA Adapters | Code | 3
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Code | 3
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Code | 2
TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling | Code | 2
Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation | Code | 2
Unlocking the Hidden Potential of CLIP in Generalizable Deepfake Detection | Code | 2
A Survey on Federated Fine-tuning of Large Language Models | Code | 2
Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment | Code | 2
SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs | Code | 2
Parameter-Efficient Fine-Tuning for Foundation Models | Code | 2
SoRA: Singular Value Decomposed Low-Rank Adaptation for Domain Generalizable Representation Learning | Code | 2
Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading | Code | 2
LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Code | 2
Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2
Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models | Code | 2
Task-Specific Directions: Definition, Exploration, and Utilization in Parameter Efficient Fine-Tuning | Code | 2
LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | Code | 2
See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition | Code | 2
FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models | Code | 2
Low-Rank Quantization-Aware Training for LLMs | Code | 2
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning | Code | 2
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | Code | 2
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning | Code | 2
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | Code | 2
MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | Code | 2
Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2
Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding | Code | 2
InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning | Code | 2
MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified