
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of their parameters, or a small set of added modules such as low-rank adapters, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial tasks must be preserved.
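
For intuition, here is a minimal sketch of the low-rank adaptation (LoRA) idea that many of the papers below build on: the pre-trained weights stay frozen, and only a small low-rank correction is trained. This assumes PyTorch; the class name LoRALinear and the defaults for r and alpha are illustrative choices, not taken from any specific paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freezes a pre-trained nn.Linear and adds a trainable low-rank
    update: y = base(x) + (alpha / r) * x A^T B^T.
    (Illustrative sketch; names and defaults are assumptions.)"""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weights
        # A is small-random, B is zero, so the update starts at exactly zero
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus the scaled low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,}")  # trainable: 12,288 / 602,880
```

Because lora_B is initialized to zero, the wrapped layer reproduces the base model exactly at the start of fine-tuning; for a 768×768 layer with r = 8, only about 2% of the parameters receive gradients.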

Papers

Showing 476–500 of 935 papers

Title | Status | Hype
Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models | Code | 1
Parameter-Efficient Fine-Tuning via Circular Convolution | — | 0
PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization | Code | 0
LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | Code | 2
Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective | — | 0
Zero-Shot Embeddings Inform Learning and Forgetting with Vision-Language Encoders | — | 0
Learn to Preserve and Diversify: Parameter-Efficient Group with Orthogonal Regularization for Domain Generalization | Code | 0
Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models | — | 0
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks | — | 0
InstructAV: Instruction Fine-tuning Large Language Models for Authorship Verification | Code | 0
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening | Code | 0
LoRA-PT: Low-Rank Adapting UNETR for Hippocampus Segmentation Using Principal Tensor Singular Values and Vectors | Code | 0
SDPT: Synchronous Dual Prompt Tuning for Fusion-based Visual-Language Pre-trained Models | Code | 0
Probing the Efficacy of Federated Parameter-Efficient Fine-Tuning of Vision Transformers for Medical Image Classification | — | 0
Low-Rank Interconnected Adaptation across Layers | Code | 0
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization | Code | 1
Parameter Efficient Fine Tuning for Multi-scanner PET to PET Reconstruction | — | 0
ROSA: Random Subspace Adaptation for Efficient Fine-Tuning | Code | 0
Reprogramming Distillation for Medical Foundation Models | Code | 0
A Survey on LoRA of Large Language Models | Code | 3
See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition | Code | 2
SBoRA: Low-Rank Adaptation with Regional Weight Updates | Code | 0
LoRA-GA: Low-Rank Adaptation with Gradient Approximation | Code | 3
GPT vs RETRO: Exploring the Intersection of Retrieval and Parameter-Efficient Fine-Tuning | — | 0
ASteISR: Adapting Single Image Super-resolution Pre-trained Model for Efficient Stereo Image Super-resolution | Code | 0
Page 20 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified