SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's behavior on its initial task.
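As a concrete illustration, here is a minimal sketch of low-rank adaptation (LoRA), one common PEFT method that appears throughout the papers below. All names and dimensions here are hypothetical, chosen for illustration: a frozen weight matrix `W` is augmented with a trainable low-rank update `(alpha / r) * B @ A`, so only `A` and `B` are trained.

```python
import numpy as np

# Hypothetical, illustrative LoRA sketch: adapt a frozen linear layer
# with a trainable low-rank update (alpha / r) * B @ A.
rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 4, 8      # r << d_in gives the savings
W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, low-rank "down" map
B = np.zeros((d_out, r))                   # trainable "up" map, zero-initialized

def lora_forward(x):
    # Frozen path plus scaled low-rank correction. Because B is zero at
    # initialization, the adapted layer starts out identical to the original.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # no behavior change at init

full_params = W.size          # parameters touched by full fine-tuning
lora_params = A.size + B.size  # parameters trained under LoRA
print(f"trainable: {lora_params} vs full fine-tuning: {full_params}")
```

With these (made-up) dimensions, LoRA trains 512 parameters instead of 4096; at deployment the update can be merged into `W` (`W + (alpha / r) * B @ A`), so no extra inference cost remains.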

Papers

Showing 751–775 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| LORENZA: Enhancing Generalization in Low-Rank Gradient LLM Training via Efficient Zeroth-Order Adaptive SAM | | 0 |
| LoRTA: Low Rank Tensor Adaptation of Large Language Models | | 0 |
| LoTR: Low Tensor Rank Weight Adaptation | | 0 |
| LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits | | 0 |
| Low-Rank Adaptation of Neural Fields | | 0 |
| Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | | 0 |
| Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning | | 0 |
| LPT++: Efficient Training on Mixture of Long-tailed Experts | | 0 |
| LSR-Adapt: Ultra-Efficient Parameter Tuning with Matrix Low Separation Rank Kernel Adaptation | | 0 |
| MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | | 0 |
| Mamba State-Space Models Are Lyapunov-Stable Learners | | 0 |
| MAP: Revisiting Weight Decomposition for Low-Rank Adaptation | | 0 |
| Adapters Mixup: Mixing Parameter-Efficient Adapters to Enhance the Adversarial Robustness of Fine-tuned Pre-trained Text Classifiers | | 0 |
| MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts | | 0 |
| Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences | | 0 |
| Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | | 0 |
| MedFocusCLIP: Improving few shot classification in medical datasets using pixel wise attention | | 0 |
| Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization | | 0 |
| MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair | | 0 |
| Meta-Adapter: Parameter Efficient Few-Shot Learning through Meta-Learning | | 0 |
| Meta-Learning Adaptable Foundation Models | | 0 |
| Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning | | 0 |
| MetaLoRA: Tensor-Enhanced Adaptive Low-Rank Fine-tuning | | 0 |
| MetaTT: A Global Tensor-Train Adapter for Parameter-Efficient Fine-Tuning | | 0 |
| MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning | | 0 |
Page 31 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |