Self-regulating Prompts: Foundational Model Adaptation without Forgetting

2023-07-13 · ICCV 2023 · Code Available

Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan

Abstract

Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
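The abstract's three-pronged regularization can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the L1 feature-agreement loss, Gaussian-weighted prompt aggregation over epochs, and template-averaged text features follow the abstract's description, but all function names, array shapes, and the parameters `mu` and `sigma` are assumptions for illustration.

```python
import numpy as np

def l1_agreement_loss(prompted_feats, frozen_feats):
    """(a) Mutual agreement: penalize the distance between features produced
    with learned prompts and those of the frozen CLIP encoder (L1 assumed here)."""
    return np.abs(prompted_feats - frozen_feats).mean()

def gaussian_weighted_prompt_ensemble(prompt_history, mu, sigma):
    """(b) Self-ensemble: aggregate prompts saved at each training epoch with
    Gaussian weights centered at epoch `mu` (illustrative weighting schedule)."""
    epochs = np.arange(1, len(prompt_history) + 1, dtype=float)
    weights = np.exp(-((epochs - mu) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()  # normalize so the ensemble is a convex combination
    return np.tensordot(weights, np.stack(prompt_history), axes=1)

def textual_diversity_target(text_feats_per_template):
    """(c) Textual diversity: average frozen-CLIP text features obtained from
    multiple prompt templates to form a more diverse regularization target."""
    return np.mean(np.stack(text_feats_per_template), axis=0)
```

In training, the agreement loss would be added to the usual cross-entropy objective so the prompts stay anchored to the frozen model's general features, while the epoch-wise ensemble replaces the final-epoch prompts at inference time.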

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | PromptSRC | Harmonic mean | 96.02 | | Unverified |
| DTD | PromptSRC | Harmonic mean | 71.75 | | Unverified |
| EuroSAT | PromptSRC | Harmonic mean | 82.32 | | Unverified |
| FGVC-Aircraft | PromptSRC | Harmonic mean | 40.15 | | Unverified |
| Food-101 | PromptSRC | Harmonic mean | 91.1 | | Unverified |
| ImageNet | PromptSRC | Harmonic mean | 74.01 | | Unverified |
| ImageNet-A | PromptSRC | Top-1 accuracy % | 50.9 | | Unverified |
| ImageNet-R | PromptSRC | Top-1 accuracy % | 77.8 | | Unverified |
| ImageNet-S | PromptSRC | Top-1 accuracy % | 49.55 | | Unverified |
| ImageNet V2 | PromptSRC | Top-1 accuracy % | 64.35 | | Unverified |
| Oxford 102 Flower | PromptSRC | Harmonic mean | 85.95 | | Unverified |
| Oxford-IIIT Pet Dataset | PromptSRC | Harmonic mean | 96.3 | | Unverified |
| Stanford Cars | PromptSRC | Harmonic mean | 76.58 | | Unverified |
| SUN397 | PromptSRC | Harmonic mean | 80.52 | | Unverified |
| UCF101 | PromptSRC | Harmonic mean | 82.74 | | Unverified |
