
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small number of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
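
A minimal sketch of one widely used PEFT technique, low-rank adaptation (LoRA): the pre-trained weight matrix stays frozen and a small trainable low-rank update is added in parallel, so only a few percent of the layer's parameters receive gradients. The class name, rank, and layer dimensions below are illustrative assumptions, not taken from any paper on this list.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update.

    Forward pass: y = W x + (alpha / r) * B(A(x)), where W (and its
    bias) are frozen and only the rank-r factors A and B are trained.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)     # update starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

# Example: wrap a single 768x768 projection (a hypothetical layer size).
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")  # roughly 2%
```

In a full fine-tune, the same wrapping would typically be applied to selected attention and feed-forward projections throughout the model; libraries such as Hugging Face's peft automate this.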

Papers

Showing papers 501–550 of 935 (page 11 of 19)

All papers on this page list a Hype score of 0 and no verification status.

- MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts
- Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A Case Study in Marathi
- SARA: Singular-Value Based Adaptive Low-Rank Adaption
- Memory-Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation
- 6G WavesFM: A Foundation Model for Sensing, Communication, and Localization
- A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis
- Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models
- AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information
- Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers
- Adapter-X: A Novel General Parameter-Efficient Fine-Tuning Framework for Vision
- Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images
- Adaptive Layer Selection for Efficient Vision Transformer Fine-Tuning
- Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices
- Adaptive parameter-efficient fine-tuning via Hessian-informed subset selection
- Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models
- AdaViPro: Region-based Adaptive Visual Prompt for Large-Scale Models Adapting
- A Decade of Wheat Mapping for Lebanon
- AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs
- Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings
- A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge
- AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
- A GEN AI Framework for Medical Note Generation
- Ahead-of-Time P-Tuning
- A Hessian-informed hyperparameter optimization for differential learning rate
- Aligner: One Global Token is Worth Millions of Parameters When Aligning Large Language Models
- ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models
- A LoRA is Worth a Thousand Pictures
- AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
- AMR Parsing with Instruction Fine-tuned Pre-trained Language Models
- A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language Models
- An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model
- An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
- A New Chinese Landscape Paintings Generation Model based on Stable Diffusion using DreamBooth
- An Improved Empirical Fisher Approximation for Natural Gradient Descent
- A Novel Hybrid Parameter-Efficient Fine-Tuning Approach for Hippocampus Segmentation and Alzheimer's Disease Diagnosis
- A Parameter-efficient Language Extension Framework for Multilingual ASR
- A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA
- ARD-LoRA: Dynamic Rank Allocation for Parameter-Efficient Fine-Tuning of Foundation Models with Heterogeneous Adaptation Needs
- A Single Linear Layer Yields Task-Adapted Low-Rank Matrices
- ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers
- A Split-and-Privatize Framework for Large Language Model Fine-Tuning
- Assessing Translation capabilities of Large Language Models involving English and Indian Languages
- Assortment of Attention Heads: Accelerating Federated PEFT with Head Pruning and Strategic Client Selection
- A Survey of Recent Backdoor Attacks and Defenses in Large Language Models
- A Survey on Efficient Federated Learning Methods for Foundation Model Training
- A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model
- AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping
- Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models
- AutoPsyC: Automatic Recognition of Psychodynamic Conflicts from Semi-structured Interviews with Large Language Models
- Balancing Stability and Plasticity in Pretrained Detector: A Dual-Path Framework for Incremental Object Detection

Benchmark Results

The three tables below appear to report leaderboard entries for three separate benchmarks (not named in this extract); the Verified column is empty for every entry.

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified