
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small number of added ones) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's capabilities while adding task-specific behavior.
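For intuition, below is a minimal PyTorch sketch of LoRA (low-rank adaptation), one widely used PEFT method and the basis of several papers listed here (e.g., AdaLoRA). The `LoRALinear` class name and the `rank`/`alpha` defaults are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA sketch).

    Note: class name and defaults are illustrative, not a reference implementation.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors A (rank x in) and B (out x rank); only these are trained.
        # B starts at zero so the wrapped layer initially matches the frozen base.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction: W x + scaling * (B A) x.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

# Usage: wrap a projection layer; only lora_a and lora_b receive gradients.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
```

In practice such a wrapper is applied to, say, the attention projection layers of a frozen transformer, so only the low-rank factors (typically well under 1% of the total parameter count) are trained.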

Papers

Showing 876–900 of 935 papers

| Title | Status | Hype |
|-------|--------|------|
| RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models | Code | 0 |
| Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs | — | 0 |
| AMR Parsing with Instruction Fine-tuned Pre-trained Language Models | — | 0 |
| MasakhaNEWS: News Topic Classification for African languages | Code | 1 |
| AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs | Code | 1 |
| DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning | Code | 1 |
| Strong Baselines for Parameter Efficient Few-Shot Fine-tuning | — | 0 |
| GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2 |
| Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation | Code | 1 |
| Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning | Code | 0 |
| RCA: Region Conditioned Adaptation for Visual Abductive Reasoning | Code | 0 |
| AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | Code | 1 |
| Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning | Code | 1 |
| Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models | Code | 1 |
| Task-Specific Skill Localization in Fine-tuned Language Models | Code | 1 |
| An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning | — | 0 |
| AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning | Code | 1 |
| Parameter-Efficient Fine-Tuning Design Spaces | — | 0 |
| Adapting Shortcut With Normalizing Flow: An Efficient Tuning Framework for Visual Recognition | Code | 0 |
| Understanding and Improving Transfer Learning of Deep Models via Neural Collapse | — | 0 |
| SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning | — | 0 |
| HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation | — | 0 |
| Towards Practical Plug-and-Play Diffusion Models | Code | 1 |
| Parameter-Efficient Finetuning of Transformers for Source Code | Code | 0 |
| Visual Query Tuning: Towards Effective Usage of Intermediate Representations for Parameter and Memory Efficient Transfer Learning | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified |