SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters (or a small set of added parameters), leaving the bulk of the pre-trained weights frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task should be preserved.
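
As a rough illustration of the idea behind one widely used PEFT method (LoRA-style low-rank adaptation), the sketch below freezes a pre-trained linear layer and trains only two small low-rank factors. It is a minimal, self-contained example; the names (`LoRALinear`, `rank`, `alpha`) are illustrative and not taken from any specific paper listed below.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)           # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Trainable low-rank factors: effective weight is W + (alpha / rank) * B @ A
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} parameters")  # only the A and B factors train
```

Only the two low-rank matrices (roughly 12k parameters here versus ~590k in the frozen layer) receive gradients, which is what keeps the fine-tuning footprint small.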

Papers

Showing 301–350 of 935 papers

Title | Status | Hype
Parameter-Efficient Fine-Tuning without Introducing New Latency | Code | 0
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0
Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0
Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Code | 0
PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization | Code | 0
Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models | Code | 0
DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Code | 0
Parameter-Efficient Fine-Tuning of Vision Foundation Model for Forest Floor Segmentation from UAV Imagery | Code | 0
DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis | Code | 0
BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models | Code | 0
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | Code | 0
Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics | Code | 0
Benchmarking Pathology Foundation Models: Adaptation Strategies and Scenarios | Code | 0
Parameter Efficient Fine Tuning Llama 3.1 for Answering Arabic Legal Questions: A Case Study on Jordanian Laws | Code | 0
DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution | Code | 0
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts | Code | 0
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting | Code | 0
Domain Expansion: Parameter-Efficient Modules as Building Blocks for Composite Domains | Code | 0
Parameter-Efficient Finetuning of Transformers for Source Code | Code | 0
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation | Code | 0
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models | Code | 0
Orchid2024: A cultivar-level dataset and methodology for fine-grained classification of Chinese Cymbidium Orchids | Code | 0
DLP: Dynamic Layerwise Pruning in Large Language Models | Code | 0
SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning | Code | 0
NLoRA: Nyström-Initiated Low-Rank Adaptation for Large Language Models | Code | 0
On-Device LLM for Context-Aware Wi-Fi Roaming | Code | 0
Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm | Code | 0
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models | Code | 0
Music for All: Representational Bias and Cross-Cultural Adaptability of Music Generation Models | Code | 0
Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography | Code | 0
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment | Code | 0
Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators | Code | 0
Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models | Code | 0
MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | Code | 0
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with GuidedSelection Vectors | Code | 0
MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning | Code | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training | Code | 0
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning | Code | 0
15,500 Seconds: Lean UAV Classification Leveraging PEFT and Pre-Trained Networks | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network | Code | 0
DARE the Extreme: Revisiting Delta-Parameter Pruning For Fine-Tuned Models | Code | 0
GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | Code | 0
ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 0
ASteISR: Adapting Single Image Super-resolution Pre-trained Model for Efficient Stereo Image Super-resolution | Code | 0
Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing | Code | 0
CustomTTT: Motion and Appearance Customized Video Generation via Test-Time Training | Code | 0
Page 7 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified