SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the parameters, typically by freezing the original weights and updating a compact set of added or selected parameters. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial tasks must be preserved.
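As a concrete illustration, below is a minimal sketch of one widely used PEFT method, LoRA (low-rank adaptation), which many of the papers listed on this page build on, using the Hugging Face `transformers` and `peft` libraries. The base model, target modules, and hyperparameters here are illustrative assumptions, not settings drawn from this page.

```python
# Minimal LoRA sketch with Hugging Face `peft`. Model name and
# hyperparameters are illustrative assumptions only; the LLaMA-2 checkpoint
# is gated and requires accepting its license on the Hub.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA freezes the original weights and injects small trainable low-rank
# matrices into the selected weight matrices.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Only the injected matrices receive gradients; the reported trainable
# fraction is typically well under 1% of the full model.
model.print_trainable_parameters()
```

The resulting `model` can then be trained with any standard training loop or trainer; only the adapter weights are updated and saved, which is what keeps the method parameter-efficient.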

Papers

Showing 201–250 of 935 papers

Title | Status | Hype
MoLoRec: A Generalizable and Efficient Framework for LLM-Based Recommendation |  | 0
LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits |  | 0
Music for All: Representational Bias and Cross-Cultural Adaptability of Music Generation Models | Code | 0
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters |  | 0
Model Diffusion for Certifiable Few-shot Transfer Learning |  | 0
SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model | Code | 1
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization |  | 0
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning |  | 0
FedP^2EFT: Federated Learning to Personalize Parameter Efficient Fine-Tuning for Multilingual LLMs |  | 0
Bilevel ZOFO: Bridging Parameter-Efficient and Zeroth-Order Techniques for Efficient LLM Fine-Tuning and Meta-Training |  | 0
RandLoRA: Full-rank parameter-efficient fine-tuning of large models |  | 0
Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Code | 1
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA |  | 0
Parameter Efficient Fine-Tuning of Segment Anything Model | Code | 1
Norm-Bounded Low-Rank Adaptation |  | 0
Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability |  | 0
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2 |  | 0
LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation |  | 0
LoRAGuard: An Effective Black-box Watermarking Approach for LoRAs |  | 0
Fine Tuning without Catastrophic Forgetting via Selective Low Rank Adaptation |  | 0
Decentralized Low-Rank Fine-Tuning of Large Language Models |  | 0
Speech Translation Refinement using Large Language Models | Code | 0
Complementary Subspace Low-Rank Adaptation of Vision-Language Models for Few-Shot Classification |  | 0
Domain Expansion: Parameter-Efficient Modules as Building Blocks for Composite Domains | Code | 0
Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models |  | 0
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | Code | 0
Parameter-Efficient Fine-Tuning for Foundation Models | Code | 2
Is your LLM trapped in a Mental Set? Investigative study on how mental sets affect the reasoning capabilities of LLMs |  | 0
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0
OMoE: Diversifying Mixture of Low-Rank Adaptation by Orthogonal Finetuning |  | 0
LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning |  | 0
Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models |  | 0
TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning |  | 0
Optimizing Language Models for Grammatical Acceptability: A Comparative Study of Fine-Tuning Techniques |  | 0
A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language Models |  | 0
A Hessian-informed hyperparameter optimization for differential learning rate |  | 0
Speech Recognition for Automatically Assessing Afrikaans and isiXhosa Preschool Oral Narratives |  | 0
How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters | Code | 0
A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model |  | 0
TADFormer: Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning |  | 0
Spectral-Aware Low-Rank Adaptation for Speaker Verification | Code | 0
MedFocusCLIP: Improving few shot classification in medical datasets using pixel wise attention |  | 0
ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 0
Efficient Deployment of Large Language Models on Resource-constrained Devices |  | 0
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
tCURLoRA: Tensor CUR Decomposition Based Low-Rank Parameter Adaptation and Its Application in Medical Image Segmentation |  | 0
SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation |  | 0
Rethinking Token Reduction with Parameter-Efficient Fine-Tuning in ViT for Pixel-Level Tasks | Code | 0
F^3OCUS - Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics |  | 0
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning |  | 0
Page 5 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified