
DePT: Decoupled Prompt Tuning

2023-09-14 · CVPR 2024 · Code Available

Ji Zhang, Shihan Wu, Lianli Gao, Heng Tao Shen, Jingkuan Song


Abstract

This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better the tuned model generalizes to the base (or target) task, the worse it generalizes to new tasks, and vice versa. Specifically, through an in-depth analysis of the learned features of the base and new tasks, we observe that the BNT stems from a channel bias issue, i.e., the vast majority of feature channels are occupied by base-specific knowledge, resulting in the collapse of task-shared knowledge important to new tasks. To address this, we propose the Decoupled Prompt Tuning (DePT) framework, which decouples base-specific knowledge from feature channels into an isolated feature space during prompt tuning, so as to maximally preserve task-shared knowledge in the original feature space for achieving better zero-shot generalization on new tasks. Importantly, our DePT is orthogonal to existing prompt tuning methods, hence it can improve all of them. Extensive experiments on 11 datasets show the strong flexibility and effectiveness of DePT. Our code and pretrained models are available at https://github.com/Koorye/DePT.
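The decoupling idea described in the abstract can be illustrated with a minimal numpy sketch. This is only our reading of the high-level mechanism, not the authors' implementation: all names, shapes, and the channel-wise scale-and-shift transform are illustrative assumptions. The point is that base-task logits are computed in an isolated (transformed) feature space, while zero-shot scoring for new tasks keeps the original, task-shared features.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3  # toy feature dimension and toy number of classes (hypothetical sizes)

# Frozen image-encoder feature for one sample (stand-in for a CLIP-style feature).
f = rng.normal(size=D)

# Isolated feature space: a learned channel-wise transform (scale + shift here,
# as an assumed simple instantiation) routes base-specific knowledge away from
# the original, task-shared feature space.
scale = rng.normal(size=D)
shift = rng.normal(size=D)
f_base = scale * f + shift  # used ONLY for the base task

# Base-task classifier head operates on the isolated features...
W_base = rng.normal(size=(C, D))
base_logits = W_base @ f_base

# ...while zero-shot inference on new tasks scores the ORIGINAL features
# against (hypothetical) text embeddings of new-class prompts.
T_new = rng.normal(size=(C, D))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

new_logits = np.array([cosine(f, t) for t in T_new])

print(base_logits.shape, new_logits.shape)
```

Because the base head never touches `f` directly, tuning it cannot overwrite the channels that new-task scoring depends on, which is the intuition the abstract gives for avoiding the channel bias issue.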

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | DePT | Harmonic mean | 96.28 | — | Unverified |
| DTD | DePT | Harmonic mean | 71.09 | — | Unverified |
| EuroSAT | DePT | Harmonic mean | 84.88 | — | Unverified |
| FGVC-Aircraft | DePT | Harmonic mean | 40.73 | — | Unverified |
| Food-101 | DePT | Harmonic mean | 91.22 | — | Unverified |
| ImageNet | DePT | Harmonic mean | 74.02 | — | Unverified |
| Oxford 102 Flower | DePT | Harmonic mean | 86.46 | — | Unverified |
| Oxford-IIIT Pet Dataset | DePT | Harmonic mean | 96.37 | — | Unverified |
| Stanford Cars | DePT | Harmonic mean | 77.79 | — | Unverified |
| SUN397 | DePT | Harmonic mean | 81.06 | — | Unverified |
| UCF101 | DePT | Harmonic mean | 82.46 | — | Unverified |
