Visual Prompt Tuning
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim
Code
- github.com/KMnP/vpt (official, in paper) — PyTorch, ★ 1,214
- github.com/heekhero/DTL — PyTorch, ★ 130
- github.com/Yiming-M/CLIP-EBC — PyTorch, ★ 92
- github.com/TooTouch/VPT — PyTorch, ★ 17
- github.com/wgcban/apt — PyTorch, ★ 16
- github.com/unites-lab/vpns — PyTorch, ★ 12
Abstract
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small number of trainable parameters (less than 1% of model parameters) in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.
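The core idea in the abstract — prepending a small set of learnable prompt tokens to the input sequence of a frozen Transformer — can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the class name, the toy backbone, and all dimensions are assumptions; see the official repository (github.com/KMnP/vpt) for the real code.

```python
import torch
import torch.nn as nn

class VPTShallow(nn.Module):
    """Sketch of shallow visual prompt tuning: learnable prompt tokens
    are inserted after the [CLS] token of a frozen Transformer encoder.
    Only the prompts and the classification head receive gradients."""

    def __init__(self, backbone: nn.Module, embed_dim: int,
                 num_prompts: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze the backbone
            p.requires_grad = False
        # The only new parameters: a handful of prompt tokens + a head,
        # typically well under 1% of the backbone's parameter count.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings, [CLS] assumed at index 0
        batch = tokens.shape[0]
        prompts = self.prompts.expand(batch, -1, -1)
        # prepend prompts between [CLS] and the patch tokens
        x = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        x = self.backbone(x)                   # frozen forward pass
        return self.head(x[:, 0])              # classify from [CLS]


# Toy stand-in for a pre-trained ViT encoder (one Transformer layer).
backbone = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
model = VPTShallow(backbone, embed_dim=16, num_prompts=5, num_classes=10)

tokens = torch.randn(2, 8, 16)   # (batch, 1 CLS + 7 patches, embed dim)
logits = model(tokens)           # (2, 10)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

At training time one would pass only `model.prompts` and `model.head` parameters to the optimizer; the frozen backbone can be shared across tasks, which is where the per-task storage savings come from.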
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-100-LT (ρ=10) | VPT | Error Rate | 10.4 | — | Unverified |
| CIFAR-100-LT (ρ=50) | VPT | Error Rate | 15.2 | — | Unverified |
| CIFAR-100-LT (ρ=100) | VPT | Error Rate | 19.0 | — | Unverified |