SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of added parameters, rather than fine-tuning the entire model. This is particularly useful when computational resources are limited, and, because the original weights are left largely untouched, it helps preserve the model's performance on its original task.
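
The most common PEFT method among the papers listed below is LoRA (Low-Rank Adaptation): freeze the pre-trained weights and train only a small low-rank update added to selected layers. The sketch below is a minimal, self-contained PyTorch illustration of that idea; the `LoRALinear` class, its initialization scheme, and the default hyperparameters (`r`, `alpha`) are our own illustrative choices, not the API of any particular PEFT library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so the wrapped layer initially behaves exactly
        # like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: only the low-rank factors receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # 12288 / 602880, roughly 2%
```

Because only `lora_A` and `lora_B` are trained, optimizer state and saved checkpoints shrink accordingly, and after training the low-rank update can be merged back into the frozen weight so inference incurs no extra cost.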

Papers

Showing 601–650 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| Cross-Lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models | Code | 0 |
| Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting | | 0 |
| Enabling Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines | | 0 |
| Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape | | 0 |
| Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm | Code | 0 |
| HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation | | 0 |
| Beyond LoRA: Exploring Efficient Fine-Tuning Techniques for Time Series Foundational Models | | 0 |
| LPT++: Efficient Training on Mixture of Long-tailed Experts | | 0 |
| THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models | Code | 0 |
| From Text to Emoji: How PEFT-Driven Personality Manipulation Unleashes the Emoji Potential in LLMs | | 0 |
| COMFORT: A Continual Fine-Tuning Framework for Foundation Models Targeted at Consumer Healthcare | | 0 |
| Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights | | 0 |
| SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values | | 0 |
| iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation | | 0 |
| Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA | | 0 |
| Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs | | 0 |
| User-Specific Dialogue Generation with User Profile-Aware Pre-Training Model and Parameter-Efficient Fine-Tuning | | 0 |
| A Novel Hybrid Parameter-Efficient Fine-Tuning Approach for Hippocampus Segmentation and Alzheimer's Disease Diagnosis | | 0 |
| Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization | | 0 |
| FedMCP: Parameter-Efficient Federated Learning with Model-Contrastive Personalization | | 0 |
| GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | Code | 0 |
| Pre-training Everywhere: Parameter-Efficient Fine-Tuning for Medical Image Analysis via Target Parameter Pre-training | | 0 |
| StyleSpeech: Parameter-efficient Fine Tuning for Pre-trained Controllable Text-to-Speech | Code | 0 |
| Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models | Code | 0 |
| Question answering system of bridge design specification based on large language model | Code | 0 |
| SAN: Hypothesizing Long-Term Synaptic Development and Neural Engram Mechanism in Scalable Model's Parameter-Efficient Fine-Tuning | Code | 0 |
| Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings | | 0 |
| Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks | | 0 |
| Towards Inducing Document-Level Abilities in Standard Multilingual Neural Machine Translation Models | | 0 |
| Pluto and Charon: A Time and Memory Efficient Collaborative Edge AI Framework for Personal LLMs Fine-Tuning | | 0 |
| TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition | Code | 0 |
| NoRA: Nested Low-Rank Adaptation for Efficient Fine-Tuning Large Models | | 0 |
| Combo: Co-speech holistic 3D human motion generation and efficient customizable adaptation in harmony | | 0 |
| MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair | | 0 |
| Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models | | 0 |
| Adaptive Layer Selection for Efficient Vision Transformer Fine-Tuning | | 0 |
| A New Chinese Landscape Paintings Generation Model based on Stable Diffusion using DreamBooth | | 0 |
| KIND: Knowledge Integration and Diversion for Training Decomposable Models | Code | 0 |
| LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image | | 0 |
| Orchid2024: A cultivar-level dataset and methodology for fine-grained classification of Chinese Cymbidium Orchids | Code | 0 |
| BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models | Code | 0 |
| From Words to Worth: Newborn Article Impact Prediction with LLM | | 0 |
| Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A Case Study in Marathi | | 0 |
| SARA: Singular-Value Based Adaptive Low-Rank Adaption | | 0 |
| FastEdit: Fast Text-Guided Single-Image Editing via Semantic-Aware Diffusion Fine-Tuning | | 0 |
| MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts | | 0 |
| Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs | | 0 |
| Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment | | 0 |
| ELP-Adapters: Parameter Efficient Adapter Tuning for Various Speech Processing Tasks | | 0 |
| Parameter-Efficient Fine-Tuning via Circular Convolution | | 0 |
Page 13 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |