SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of added parameters, while keeping the rest frozen. This approach is particularly useful when computational resources or memory are limited, when many task-specific variants of a single base model must be stored, or when the original model's behavior on its initial task should be preserved.
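Many of the papers listed below build on LoRA (low-rank adaptation), one of the most widely used PEFT techniques. The following is a minimal illustrative sketch in PyTorch, not the method of any specific paper on this page: a frozen linear layer is augmented with a trainable low-rank update, so only the small A and B matrices (names chosen here for illustration) receive gradients. The rank r=8 and scaling alpha=16 are arbitrary example values.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wraps a frozen nn.Linear with a trainable low-rank update:
        y = W x + (alpha / r) * B A x
        """
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # freeze the pre-trained weights
            # Down-projection A and up-projection B; B is zero-initialized
            # so training starts exactly at the base model's behavior.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x):
            # Frozen path plus the trainable low-rank path.
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

    # Usage: wrap a projection layer, then train only the LoRA parameters.
    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable:,} / {total:,}")  # 12,288 / 602,880

For a 768x768 projection this trains roughly 12K of about 600K parameters, which is where the "parameter-efficient" savings come from.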

Papers

Showing 601–650 of 935 papers

PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization
Permissioned LLMs: Enforcing Access Control in Large Language Models
Personalized Federated Fine-tuning for Heterogeneous Data: An Automatic Rank Learning Approach via Two-Level LoRA
Personalized Text Generation with Contrastive Activation Steering
PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition
PETapter: Leveraging PET-style classification heads for modular few-shot parameter-efficient fine-tuning
pFedMxF: Personalized Federated Class-Incremental Learning with Mixture of Frequency Aggregation
Pluto and Charon: A Time and Memory Efficient Collaborative Edge AI Framework for Personal LLMs Fine-Tuning
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs
Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention
PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models
Pre-Trained Vision-Language Models as Partial Annotators
Pre-training Everywhere: Parameter-Efficient Fine-Tuning for Medical Image Analysis via Target Parameter Pre-training
PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation
Understanding and Improving Transfer Learning of Deep Models via Neural Collapse
Privacy Preserving Conversion Modeling in Data Clean Room
Probing the Efficacy of Federated Parameter-Efficient Fine-Tuning of Vision Transformers for Medical Image Classification
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
Progtuning: Progressive Fine-tuning Framework for Transformer-based Language Models
Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA
Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning
Prompt-Efficient Fine-Tuning for GPT-like Deep Models to Reduce Hallucination and to Improve Reproducibility in Scientific Text Generation Using Stochastic Optimisation Techniques
Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models
Prompt-Tuning SAM: From Generalist to Specialist with only 2048 Parameters and 16 Training Images
PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation
Pushing Large Language Models to the 6G Edge: Vision, Challenges, and Opportunities
QERA: an Analytical Framework for Quantization Error Reconstruction
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text Reranking with Large Language Models
Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP
Quantum-Enhanced LLM Efficient Fine Tuning
QueEn: A Large Language Model for Quechua-English Translation
Query-driven Relevant Paragraph Extraction from Legal Judgments
R^3Mem: Bridging Memory Retention and Retrieval via Reversible Compression
RandLoRA: Full-rank parameter-efficient fine-tuning of large models
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation
RAP: Efficient Text-Video Retrieval with Sparse-and-Correlated Adapter
Towards Efficient Vision-Language Tuning: More Information Density, More Generalizability
Representation Discrepancy Bridging Method for Remote Sensing Image-Text Retrieval
ResLoRA: Identity Residual Mapping in Low-Rank Adaption
Resource Allocation for Stable LLM Training in Mobile Edge Computing
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion
Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations
Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning Large Language Models
Page 13 of 19

Benchmark Results

#  Model      Metric        Claimed  Verified  Status
1  LLaMA2-7b  Accuracy (%)  82.63    -         Unverified
2  LLaMA2-7b  Accuracy (%)  82.63    -         Unverified
3  LLaMA2-7b  Accuracy (%)  81.93    -         Unverified
4  LLaMA2-7b  Accuracy (%)  80.28    -         Unverified

#  Model      Metric        Claimed  Verified  Status
1  LLaMA2-7b  Accuracy (%)  76.68    -         Unverified
2  LLaMA2-7b  Accuracy (%)  76.67    -         Unverified
3  LLaMA2-7b  Accuracy (%)  76.27    -         Unverified

#  Model      Metric        Claimed  Verified  Status
1  LLaMA2-7b  Accuracy (%)  70.8     -         Unverified
2  LLaMA2-7b  Accuracy (%)  70.09    -         Unverified
3  LLaMA2-7b  Accuracy (%)  69.85    -         Unverified