SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters (or small added modules) while keeping the bulk of the pre-trained weights frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task should be preserved (i.e., to avoid catastrophic forgetting).
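
As a concrete illustration of the idea, below is a minimal sketch of one popular PEFT method, LoRA (low-rank adaptation), in PyTorch. The layer size, rank, and scaling factor here are illustrative assumptions, not settings taken from any paper listed on this page: the frozen base weight is left untouched, and only two small low-rank matrices are trained.

```python
# Minimal LoRA-style adapter sketch (illustrative; rank and alpha are
# assumed values, not taken from the papers below).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Down-projection A (small random init) and up-projection B (zero init,
        # so training starts from the unmodified base model).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

For this configuration only about 2% of the layer's parameters are trainable, which is the source of PEFT's memory and storage savings.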

Papers

Showing 276–300 of 935 papers

Title | Status | Hype
FedVLMBench: Benchmarking Federated Fine-Tuning of Vision-Language Models | - | 0
FLoRIST: Singular Value Thresholding for Efficient and Accurate Federated Fine-Tuning of Large Language Models | - | 0
MetaTT: A Global Tensor-Train Adapter for Parameter-Efficient Fine-Tuning | - | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
Mitigating Catastrophic Forgetting with Adaptive Transformer Block Expansion in Federated Fine-Tuning | - | 0
InstantFT: An FPGA-Based Runtime Subsecond Fine-tuning of CNN Models | - | 0
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning | Code | 0
Leveraging Coordinate Momentum in SignSGD and Muon: Memory-Optimized Zero-Order | Code | 0
Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences | - | 0
WeightLoRA: Keep Only Necessary Adapters | - | 0
Parameter Efficient Fine Tuning Llama 3.1 for Answering Arabic Legal Questions: A Case Study on Jordanian Laws | Code | 0
Uni-LoRA: One Vector is All You Need | - | 0
Assortment of Attention Heads: Accelerating Federated PEFT with Head Pruning and Strategic Client Selection | - | 0
LoRA as a Flexible Framework for Securing Large Vision Systems | - | 0
On Fairness of Task Arithmetic: The Role of Task Vectors | - | 0
Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need | Code | 0
Beyond Zero Initialization: Investigating the Impact of Non-Zero Initialization on LoRA Fine-Tuning Dynamics | Code | 0
SC-LoRA: Balancing Efficient Fine-tuning and Knowledge Preservation via Subspace-Constrained LoRA | - | 0
Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert | - | 0
Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models | - | 0
MAP: Revisiting Weight Decomposition for Low-Rank Adaptation | - | 0
Weight Spectra Induced Efficient Model Adaptation | - | 0
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning | - | 0
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective | - | 0
Permissioned LLMs: Enforcing Access Control in Large Language Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified