SOTAVerified

Consistency-guided Prompt Learning for Vision-Language Models

2023-06-01 · Code Available

Shuvendu Roy, Ali Etemad


Abstract

We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models. Our approach improves the generalization of large foundation models when fine-tuned on downstream tasks in a few-shot setting. The basic idea of CoPrompt is to enforce a consistency constraint between the predictions of the trainable and pre-trained models to prevent overfitting on the downstream task. Additionally, we introduce the following two components into our consistency constraint to further boost the performance: enforcing consistency on two perturbed inputs and combining two dominant tuning paradigms, prompting and adapters. Enforcing consistency on perturbed inputs serves to further regularize the consistency constraint, thereby improving generalization. Moreover, the integration of adapters and prompts not only enhances performance on downstream tasks but also offers increased tuning flexibility in both the input and output spaces. This facilitates more effective adaptation to downstream tasks in a few-shot learning setting. Experiments show that CoPrompt outperforms existing methods on a range of evaluation suites, including base-to-novel generalization, domain generalization, and cross-dataset evaluation. On generalization, CoPrompt improves the state of the art on zero-shot tasks and the overall harmonic mean over 11 datasets. Detailed ablation studies show the effectiveness of each of the components in CoPrompt. We make our code available at https://github.com/ShuvenduRoy/CoPrompt.
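The core idea in the abstract, enforcing consistency between the trainable model and the frozen pre-trained model on two perturbed views of the same input, can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the linear "encoders", the perturbations, and the choice of cosine distance as the consistency measure are stand-in assumptions for illustration only.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity, averaged over the batch."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

def consistency_loss(f_trainable, f_frozen, x_aug1, x_aug2):
    """Consistency between the tuned model applied to one perturbed view
    and the frozen pre-trained model applied to another perturbed view
    of the same input (a sketch of the constraint described above)."""
    z_tuned = f_trainable(x_aug1)   # features from the model being fine-tuned
    z_frozen = f_frozen(x_aug2)     # features from the frozen foundation model
    return cosine_distance(z_tuned, z_frozen)

# --- toy usage; the linear "encoders" below are illustrative stand-ins ---
W_frozen = np.eye(4)
W_tuned = np.eye(4) * 1.1            # a slightly drifted fine-tuned model
frozen = lambda x: x @ W_frozen
tuned = lambda x: x @ W_tuned

x = np.array([[1.0, 2.0, 3.0, 4.0]])
view1 = x + 0.01                     # two perturbed views of the same input
view2 = x - 0.01

loss = consistency_loss(tuned, frozen, view1, view2)
print(loss)  # small but nonzero: the views differ slightly in direction
```

In practice this regularizer would be added to the usual supervised few-shot loss, pulling the tuned model's features back toward the pre-trained model's to limit overfitting.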

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | CoPrompt | Harmonic mean | 96.55 | | Unverified |
| DTD | CoPrompt | Harmonic mean | 72.79 | | Unverified |
| EuroSAT | CoPrompt | Harmonic mean | 85.84 | | Unverified |
| FGVC-Aircraft | CoPrompt | Harmonic mean | 39.76 | | Unverified |
| Food-101 | CoPrompt | Harmonic mean | 91.4 | | Unverified |
| ImageNet | CoPrompt | Harmonic mean | 74.33 | | Unverified |
| ImageNet-A | CoPrompt | Top-1 accuracy % | 50.5 | | Unverified |
| ImageNet-R | CoPrompt | Top-1 accuracy % | 77.51 | | Unverified |
| ImageNet-S | CoPrompt | Top-1 accuracy % | 49.43 | | Unverified |
| Oxford 102 Flower | CoPrompt | Harmonic mean | 85.71 | | Unverified |
| Oxford-IIIT Pet Dataset | CoPrompt | Harmonic mean | 96.87 | | Unverified |
| Stanford Cars | CoPrompt | Harmonic mean | 75.66 | | Unverified |
| SUN397 | CoPrompt | Harmonic mean | 81.31 | | Unverified |
| UCF101 | CoPrompt | Harmonic mean | 83.07 | | Unverified |
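The "Harmonic mean" metric in the table is the standard base-to-novel generalization score: the harmonic mean of base-class and novel-class accuracy. A quick sketch of how it is computed (the accuracy numbers in the example are illustrative only, not taken from the paper):

```python
def harmonic_mean(base_acc, novel_acc):
    """Harmonic mean of base-class and novel-class accuracy (%)."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# illustrative numbers only
print(round(harmonic_mean(80.0, 70.0), 2))  # → 74.67
```

The harmonic mean penalizes a large gap between the two accuracies, so a model cannot score well by excelling on base classes while failing on novel ones.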
