
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or a small set of added parameters, while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's performance on its initial task.
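
As a concrete illustration, the sketch below (plain PyTorch, assuming a LoRA-style low-rank update, which many of the papers listed below build on) freezes a pre-trained linear layer and trains only two small matrices. The class name, rank, and scaling factor are illustrative choices, not the setup of any particular paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update.

    Only the two small matrices A and B are trained, so the trainable
    parameter count is r * (in_features + out_features) instead of
    in_features * out_features.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep the pre-trained weights frozen
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(r, base.out_features))  # zero init: the update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen pre-trained path + scaled low-rank adaptation path
        return self.base(x) + (x @ self.lora_A @ self.lora_B) * self.scaling

# Hypothetical usage: adapt a single 768-dim projection and optimize only the adapter weights.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Many of the LoRA variants listed below differ chiefly in how such an update is parameterized (e.g. tensor decompositions), quantized, or mixed across tasks and experts.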

Papers

Showing 751–800 of 935 papers

Title | Status | Hype
LORENZA: Enhancing Generalization in Low-Rank Gradient LLM Training via Efficient Zeroth-Order Adaptive SAM | | 0
LoRTA: Low Rank Tensor Adaptation of Large Language Models | | 0
LoTR: Low Tensor Rank Weight Adaptation | | 0
LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits | | 0
Low-Rank Adaptation of Neural Fields | | 0
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | | 0
Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning | | 0
LPT++: Efficient Training on Mixture of Long-tailed Experts | | 0
LSR-Adapt: Ultra-Efficient Parameter Tuning with Matrix Low Separation Rank Kernel Adaptation | | 0
MALoRA: Mixture of Asymmetric Low-Rank Adaptation for Enhanced Multi-Task Learning | | 0
Mamba State-Space Models Are Lyapunov-Stable Learners | | 0
MAP: Revisiting Weight Decomposition for Low-Rank Adaptation | | 0
Adapters Mixup: Mixing Parameter-Efficient Adapters to Enhance the Adversarial Robustness of Fine-tuned Pre-trained Text Classifiers | | 0
MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts | | 0
Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences | | 0
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | | 0
MedFocusCLIP : Improving few shot classification in medical datasets using pixel wise attention | | 0
Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization | | 0
MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair | | 0
Meta-Adapter: Parameter Efficient Few-Shot Learning through Meta-Learning | | 0
Meta-Learning Adaptable Foundation Models | | 0
Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning | | 0
MetaLoRA: Tensor-Enhanced Adaptive Low-Rank Fine-tuning | | 0
MetaTT: A Global Tensor-Train Adapter for Parameter-Efficient Fine-Tuning | | 0
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning | | 0
MIP: CLIP-based Image Reconstruction from PEFT Gradients | | 0
MIRA: A Method of Federated MultI-Task Learning for LaRge LAnguage Models | | 0
Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models | | 0
Mitigating Catastrophic Forgetting with Adaptive Transformer Block Expansion in Federated Fine-Tuning | | 0
Mitigating Visual Knowledge Forgetting in MLLM Instruction-tuning via Modality-decoupled Gradient Descent | | 0
Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models | | 0
Mixture of Physical Priors Adapter for Parameter-Efficient Fine-Tuning | | 0
Mixture of Routers | | 0
Model Diffusion for Certifiable Few-shot Transfer Learning | | 0
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models | | 0
MoFE: Mixture of Frozen Experts Architecture | | 0
MoLoRec: A Generalizable and Efficient Framework for LLM-Based Recommendation | | 0
MoPEFT: A Mixture-of-PEFTs for the Segment Anything Model | | 0
Multi-Head Adapter Routing for Cross-Task Generalization | | 0
MA-FSAR: Multimodal Adaptation of CLIP for Few-Shot Action Recognition | | 0
Inducing Generalization across Languages and Tasks using Featurized Low-Rank Mixtures | | 0
Navigating Uncertainty: Optimizing API Dependency for Hallucination Reduction in Closed-Book Question Answering | | 0
NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models | | 0
Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models | | 0
NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation | | 0
Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert | | 0
Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling | | 0
NoRA: Nested Low-Rank Adaptation for Efficient Fine-Tuning Large Models | | 0
Norm-Bounded Low-Rank Adaptation | | 0
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models | | 0
Page 16 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified