SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while the rest remain frozen. This approach is particularly useful when computational resources are limited or when the original model's behaviour on its source task should be preserved, since leaving most weights untouched reduces the risk of catastrophic forgetting.
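
The mechanics can be illustrated with a minimal LoRA-style sketch in plain PyTorch. This is only an illustration of the general PEFT idea, not the method of any paper listed below; the layer size, rank r, and scaling alpha are assumed values chosen for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: effective weight is W + (alpha / r) * B @ A.
        # B starts at zero, so the wrapped layer initially behaves exactly like the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable parameters: {trainable} / {total}")
```

In this toy example only about 12k of roughly 600k parameters are trainable; applied across the projection layers of a multi-billion-parameter model, the same construction typically leaves well under 1% of the parameters trainable.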

Papers

Showing 401–450 of 935 papers

Title | Status | Hype
PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification | | 0
Efficient In-Domain Question Answering for Resource-Constrained Environments | | 0
Weak-to-Strong Backdoor Attack for Large Language Models | | 0
Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations | | 0
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography | Code | 0
Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting | | 0
PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization | Code | 1
Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation | | 0
Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0
Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition | Code | 1
Enabling Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines | | 0
Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape | | 0
Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm | Code | 0
HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation | | 0
Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2
Propulsion: Steering LLM with Tiny Fine-Tuning | Code | 1
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0
Beyond LoRA: Exploring Efficient Fine-Tuning Techniques for Time Series Foundational Models | | 0
LPT++: Efficient Training on Mixture of Long-tailed Experts | | 0
From Text to Emoji: How PEFT-Driven Personality Manipulation Unleashes the Emoji Potential in LLMs | | 0
Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models | Code | 2
COMFORT: A Continual Fine-Tuning Framework for Foundation Models Targeted at Consumer Healthcare | | 0
Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights | | 0
Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1
Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region | Code | 1
Sam2Rad: A Segmentation Model for Medical Images with Learnable Prompts | Code | 1
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models | Code | 1
SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values | | 0
Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA | | 0
iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation | | 0
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs | | 0
User-Specific Dialogue Generation with User Profile-Aware Pre-Training Model and Parameter-Efficient Fine-Tuning | | 0
A Novel Hybrid Parameter-Efficient Fine-Tuning Approach for Hippocampus Segmentation and Alzheimer's Disease Diagnosis | | 0
Task-Specific Directions: Definition, Exploration, and Utilization in Parameter Efficient Fine-Tuning | Code | 2
MoRe Fine-Tuning with 10x Fewer Parameters | Code | 1
FedMCP: Parameter-Efficient Federated Learning with Model-Contrastive Personalization | | 0
Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization | | 0
Pre-training Everywhere: Parameter-Efficient Fine-Tuning for Medical Image Analysis via Target Parameter Pre-training | | 0
GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | Code | 0
CVPT: Cross-Attention help Visual Prompt Tuning adapt visual task | Code | 1
StyleSpeech: Parameter-efficient Fine Tuning for Pre-trained Controllable Text-to-Speech | Code | 0
Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models | Code | 0
Question answering system of bridge design specification based on large language model | Code | 0
SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning | Code | 0
Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings | | 0
SORSA: Singular Values and Orthonormal Regularized Singular Vectors Adaptation of Large Language Models | Code | 1
Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks | | 0
Towards Inducing Document-Level Abilities in Standard Multilingual Neural Machine Translation Models | | 0
Positional Prompt Tuning for Efficient 3D Representation Learning | Code | 1
TDS-CLIP: Temporal Difference Side Network for Image-to-Video Transfer Learning | Code | 1
Page 9 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified