SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the pre-trained model's performance on its original task.
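Concretely, most PEFT methods freeze the pre-trained weights and train only a small set of newly added parameters. The sketch below is a minimal, illustrative example of this idea using a LoRA-style low-rank adapter in PyTorch; the LoRALinear class and all hyperparameters are assumptions made for illustration, not the implementation of any paper listed below.

```python
# Minimal PEFT sketch (illustrative only): freeze a pre-trained layer and
# train a small LoRA-style low-rank update W + scale * (B @ A) on top of it.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical wrapper: frozen linear layer plus a trainable low-rank adapter."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A is small random, B is zero, so the adapter starts as a no-op.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Toy "pre-trained" layer; in practice this would come from a real checkpoint.
pretrained = nn.Linear(768, 768)
model = LoRALinear(pretrained, rank=4)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")

# Only the adapter parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
```

Running this prints a trainable-parameter count of roughly 1% of the total, which is the core appeal of PEFT: gradients and optimizer state are needed only for the adapter weights, not the frozen base model.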

Papers

Showing 901–935 of 935 papers

Title | Status | Hype
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM | | 0
Text-guided High-definition Consistency Texture Model | | 0
HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation | | 0
RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models | Code | 0
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment | Code | 0
Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs | | 0
AMR Parsing with Instruction Fine-tuned Pre-trained Language Models | | 0
Strong Baselines for Parameter Efficient Few-Shot Fine-tuning | | 0
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning | Code | 0
RCA: Region Conditioned Adaptation for Visual Abductive Reasoning | Code | 0
An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning | | 0
Parameter-Efficient Fine-Tuning Design Spaces | | 0
Adapting Shortcut With Normalizing Flow: An Efficient Tuning Framework for Visual Recognition | Code | 0
Understanding and Improving Transfer Learning of Deep Models via Neural Collapse | | 0
SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning | | 0
HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation | | 0
Parameter-Efficient Finetuning of Transformers for Source Code | Code | 0
Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning | | 0
HyperTuning: Toward Adapting Large Language Models without Back-propagation | | 0
Multi-Head Adapter Routing for Cross-Task Generalization | | 0
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion | | 0
Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers | | 0
Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models | | 0
AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models | | 0
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks | | 0
Exploring Parameter-Efficient Fine-Tuning to Enable Foundation Models in Federated Learning | | 0
Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning | Code | 0
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning | | 0
When does Parameter-Efficient Transfer Learning Work for Machine Translation? | Code | 0
CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment | | 0
HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks | | 0
Meta-Adapter: Parameter Efficient Few-Shot Learning through Meta-Learning | | 0
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | Code | 0
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | | 0
Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified