Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or by adding small trainable modules, while the remaining weights stay frozen. This is particularly useful when computational resources are limited, or when the original model's performance on its initial task must be preserved.
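
Many of the papers listed below build on low-rank adaptation (LoRA), one of the most common PEFT recipes. As a rough illustration of the idea, here is a minimal PyTorch sketch of a LoRA-style wrapper around a frozen linear layer; the class name `LoRALinear` and the rank/scaling defaults are illustrative assumptions, not taken from any specific paper on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)). Only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pre-trained weights
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.01)
        nn.init.zeros_(self.B.weight)        # low-rank update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

# Example: adapt a single 768x768 projection.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")
```

In this sketch only the low-rank factors A and B are trained (about 12k parameters against roughly 590k frozen weights in the base layer, i.e. around 2% of the total), which is the source of PEFT's compute and storage savings.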

Papers

Showing 551–575 of 935 papers

Title | Status | Hype
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | Code | 1
RAP: Efficient Text-Video Retrieval with Sparse-and-Correlated Adapter | | 0
MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning | Code | 1
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts | Code | 0
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection | Code | 0
Parameter-efficient Fine-tuning in Hyperspherical Space for Open-vocabulary Semantic Segmentation | | 0
Low-Rank Few-Shot Adaptation of Vision-Language Models | Code | 3
IAPT: Instruction-Aware Prompt Tuning for Large Language Models | | 0
Sparsity- and Hybridity-Inspired Visual Parameter-Efficient Fine-Tuning for Medical Diagnosis | | 0
Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning | | 0
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | Code | 2
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution | Code | 0
Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models | Code | 1
Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation | | 0
Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning | | 0
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models | Code | 1
PatchProt: Hydrophobic patch prediction using protein foundation models | Code | 0
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation | Code | 0
Sparse Matrix in Large Language Model Fine-tuning | Code | 1
VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | Code | 1
BiSup: Bidirectional Quantization Error Suppression for Large Language Models | | 0
Pre-Trained Vision-Language Models as Partial Annotators | | 0
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference | Code | 1
Spectral Adapter: Fine-Tuning in Spectral Space | Code | 1
Page 23 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified