SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of newly added ones) while keeping the rest frozen. This is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
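
Many of the methods catalogued below build on or extend low-rank adaptation (LoRA), the most common PEFT recipe. As a rough illustration of the core idea, here is a minimal PyTorch sketch: the pre-trained weight matrix is frozen, and only a small low-rank update is trained. The LoRALinear class and the rank/scaling values are illustrative assumptions, not taken from any specific paper listed on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained.
    (Illustrative sketch; hyperparameters are arbitrary defaults.)"""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # freeze pre-trained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_a.weight, std=0.01)
        nn.init.zeros_(self.lora_b.weight)        # update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# With a 768x768 layer, only about 2% of the parameters remain trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

In practice, libraries such as Hugging Face's peft package this pattern, along with many of the variants that appear in the list below.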

Papers

Showing 651–700 of 935 papers

Title | Status | Hype
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | - | 0
AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information | - | 0
FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications | - | 0
Improving LoRA in Privacy-preserving Federated Learning | - | 0
Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation | Code | 2
Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | - | 0
Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R | Code | 0
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks | Code | 0
PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation | Code | 1
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | Code | 1
An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model | - | 0
Targeted Efficient Fine-tuning: Optimizing Parameter Updates with Data-Driven Sample Selection | - | 0
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation | - | 0
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | - | 0
Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | Code | 2
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models | Code | 0
STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models | Code | 0
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
ResLoRA: Identity Residual Mapping in Low-Rank Adaption | - | 0
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | Code | 1
Inducing Generalization across Languages and Tasks using Featurized Low-Rank Mixtures | - | 0
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models | - | 0
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1
MIP: CLIP-based Image Reconstruction from PEFT Gradients | - | 0
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge | - | 0
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | - | 0
Multimodal Instruction Tuning with Conditional Mixture of LoRA | Code | 1
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | - | 0
Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy? | - | 0
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning | - | 0
Advancing Parameter Efficiency in Fine-tuning via Representation Editing | Code | 1
Two-stage Cytopathological Image Synthesis for Augmenting Cervical Abnormality Screening | - | 0
KInIT at SemEval-2024 Task 8: Fine-tuned LLMs for Multilingual Machine-Generated Text Detection | Code | 1
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models | - | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | - | 0
NOTE: Notable generation Of patient Text summaries through Efficient approach based on direct preference optimization | - | 0
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting | Code | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | - | 0
Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources | - | 0
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | Code | 1
GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network | Code | 0
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks | Code | 1
Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning | - | 0
Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks | - | 0
UNDIAL: Self-Distillation with Adjusted Logits for Robust Unlearning in Large Language Models | Code | 1
Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP | - | 0
DoRA: Weight-Decomposed Low-Rank Adaptation | Code | 4
An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | Code | 2
Page 14 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified