
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters, or a small set of added parameters, and keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task should be preserved.
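As a concrete illustration, one of the most common PEFT methods in the paper list below is LoRA, which freezes the base weights and trains small low-rank update matrices inside selected layers. The following is a minimal sketch using the Hugging Face `peft` library; the checkpoint name, target modules, and hyperparameters are illustrative choices, not values taken from this page.

```python
# Minimal LoRA fine-tuning setup with Hugging Face `peft`.
# The checkpoint below is gated; any causal LM checkpoint works here.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA adds trainable low-rank matrices A and B to the chosen
# projection layers; the original weights stay frozen.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# -> roughly "trainable params: ~4.2M || all params: ~6.7B || trainable%: ~0.06"
# Only the adapter weights receive gradients during training.
```

Because only the adapter weights are trained, optimizer state and checkpoint sizes shrink dramatically, and a single frozen base model can be shared across many task-specific adapters.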

Papers

Showing 401–425 of 935 papers

Title | Status | Hype
Is Multiple Object Tracking a Matter of Specialization? | - | 0
Is your LLM trapped in a Mental Set? Investigative study on how mental sets affect the reasoning capabilities of LLMs | - | 0
Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models | - | 0
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge | - | 0
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2 | - | 0
DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning | - | 0
HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models | - | 0
HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation | - | 0
AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping | - | 0
KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning | - | 0
HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation | - | 0
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning | - | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | - | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | - | 0
HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization | - | 0
HD-PiSSA: High-Rank Distributed Orthogonal Adaptation | - | 0
LayerNorm: A key component in parameter-efficient fine-tuning | - | 0
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning | - | 0
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters | - | 0
A Text-Based Knowledge-Embedded Soft Sensing Modeling Approach for General Industrial Process Tasks Based on Large Language Model | - | 0
Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment | - | 0
Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks | - | 0
LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning | - | 0
LoRTA: Low Rank Tensor Adaptation of Large Language Models | - | 0
Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified