SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of added parameters, rather than all of its weights. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial tasks.
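As a concrete illustration, the sketch below shows one common PEFT pattern, a LoRA-style low-rank adapter written in plain PyTorch: the pre-trained weight matrix stays frozen and only two small matrices are trained. This is a minimal sketch assuming a PyTorch environment; the class and argument names (LoRALinear, rank, alpha) are illustrative and not taken from any particular paper listed below.

```python
# Minimal LoRA-style adapter sketch: freeze the pre-trained layer and train
# only a low-rank correction B @ A on top of it. Names here are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        # Trainable low-rank factors: A maps in_features -> rank, B maps rank -> out_features.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen base output plus the small trainable low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage: wrap a pre-trained layer and optimize only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because only the two small factors receive gradients, the optimizer state and the fine-tuned checkpoint are a tiny fraction of the full model size, which is the core trade-off most of the papers below explore in different ways.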

Papers

Showing 1–25 of 935 papers

Title | Status | Hype
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy |  | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
Progtuning: Progressive Fine-tuning Framework for Transformer-based Language Models |  | 0
Optimising Language Models for Downstream Tasks: A Post-Training Perspective |  | 0
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models | Code | 0
WordCon: Word-level Typography Control in Scene Text Rendering |  | 0
Exploring Adapter Design Tradeoffs for Low Resource Music Generation |  | 0
ARD-LoRA: Dynamic Rank Allocation for Parameter-Efficient Fine-Tuning of Foundation Models with Heterogeneous Adaptation Needs |  | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors | Code | 0
Sharp Generalization Bounds for Foundation Models with Asymmetric Randomized Low-Rank Adapters |  | 0
Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention |  | 0
Text to Image for Multi-Label Image Recognition with Joint Prompt-Adapter Learning |  | 0
FedVLMBench: Benchmarking Federated Fine-Tuning of Vision-Language Models |  | 0
MetaTT: A Global Tensor-Train Adapter for Parameter-Efficient Fine-Tuning |  | 0
FLoRIST: Singular Value Thresholding for Efficient and Accurate Federated Fine-Tuning of Large Language Models |  | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
Mitigating Catastrophic Forgetting with Adaptive Transformer Block Expansion in Federated Fine-Tuning |  | 0
InstantFT: An FPGA-Based Runtime Subsecond Fine-tuning of CNN Models |  | 0
Leveraging Coordinate Momentum in SignSGD and Muon: Memory-Optimized Zero-Order | Code | 0
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning | Code | 0
Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences |  | 0
WeightLoRA: Keep Only Necessary Adapters |  | 0
Parameter Efficient Fine Tuning Llama 3.1 for Answering Arabic Legal Questions: A Case Study on Jordanian Laws | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified