SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding small trainable modules, while the bulk of the pre-trained weights stays frozen. This is particularly useful when computational resources are limited, or when the original model's performance on its initial task must be preserved.
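The sketch below illustrates the core idea with a LoRA-style low-rank adapter in PyTorch: the pre-trained weights are frozen and only two small matrices are trained. It is a minimal illustration, not the method of any specific paper listed here; the class name LoRALinear and the hyperparameters r=8 and alpha=16 are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = base(x) + (alpha / r) * x A^T B^T, where only A and B are trained.
    Illustrative sketch; names and defaults are assumptions."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A gets a small random init; B starts at zero so the adapter
        # initially leaves the pre-trained model's behavior unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus scaled low-rank trainable path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a layer of a pre-trained model, then optimize only the
# adapter parameters; everything else stays frozen.
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because the base weights never change, the adapter can be removed (or merged into the base weights) after training, and the frozen model retains its original behavior on the initial task.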

Papers

Showing 251-275 of 935 papers

Title | Status | Hype
Cross-Modal Adapter for Text-Video Retrieval | Code | 1
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning | Code | 1
ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning | Code | 1
Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning | Code | 1
Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
Towards a Unified View of Parameter-Efficient Transfer Learning | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | Code | 1
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks | Code | 1
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy | - | 0
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization | Code | 0
Optimising Language Models for Downstream Tasks: A Post-Training Perspective | - | 0
WordCon: Word-level Typography Control in Scene Text Rendering | - | 0
Progtuning: Progressive Fine-tuning Framework for Transformer-based Language Models | - | 0
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models | Code | 0
Exploring Adapter Design Tradeoffs for Low Resource Music Generation | - | 0
ARD-LoRA: Dynamic Rank Allocation for Parameter-Efficient Fine-Tuning of Foundation Models with Heterogeneous Adaptation Needs | - | 0
Memba: Membrane-driven Parameter-Efficient Fine-Tuning for Mamba | Code | 0
Sharp Generalization Bounds for Foundation Models with Asymmetric Randomized Low-Rank Adapters | - | 0
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors | Code | 0
Prefix-Tuning+: Modernizing Prefix-Tuning by Decoupling the Prefix from Attention | - | 0
Text to Image for Multi-Label Image Recognition with Joint Prompt-Adapter Learning | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified