PromptKD: Unsupervised Prompt Distillation for Vision-Language Models

2024-03-05 · CVPR 2024 · Code Available

Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang

Abstract

Prompt learning has emerged as a valuable technique for enhancing vision-language models (VLMs) such as CLIP on downstream tasks in specific domains. Existing work mainly focuses on designing various forms of prompts, neglecting the potential of prompts as effective distillers for learning from larger teacher models. In this paper, we introduce an unsupervised domain prompt distillation framework, which aims to transfer the knowledge of a larger teacher model to a lightweight target model through prompt-driven imitation using unlabeled domain images. Specifically, our framework consists of two distinct stages. In the initial stage, we pre-train a large CLIP teacher model using domain (few-shot) labels. After pre-training, we leverage the unique decoupled-modality characteristic of CLIP by pre-computing and storing the text features as class vectors only once through the teacher text encoder. In the subsequent stage, the stored class vectors are shared across the teacher and student image encoders for calculating the predicted logits. Further, we align the logits of the teacher and student models via KL divergence, encouraging the student image encoder to generate probability distributions similar to the teacher's through the learnable prompts. The proposed prompt distillation process eliminates the reliance on labeled data, enabling the algorithm to leverage a vast amount of unlabeled images within the domain. Finally, the well-trained student image encoder and pre-stored text features (class vectors) are utilized for inference. To the best of our knowledge, we are the first to (1) perform unsupervised domain-specific prompt-driven knowledge distillation for CLIP, and (2) establish a practical pre-storing mechanism of text features as shared class vectors between teacher and student. Extensive experiments on 11 datasets demonstrate the effectiveness of our method.
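The two-stage recipe in the abstract maps naturally onto a short PyTorch sketch: cache the teacher's text features once as shared class vectors, then train the student's learnable prompts by matching the teacher's logits on unlabeled images. This is a minimal illustration, not the authors' implementation: the `encode_text`/`encode_image` methods, the class-name list, the temperature `tau`, and a matching feature dimension between teacher and student are all assumptions (the paper's lightweight student may require a projection layer).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def precompute_class_vectors(teacher, class_names):
    """Stage 1 (after teacher pre-training): run the teacher text encoder
    once per class and cache the normalized text features as class vectors.
    These vectors are shared by both the teacher and student image encoders."""
    text_feats = teacher.encode_text(class_names)   # (C, D) -- hypothetical API
    return F.normalize(text_feats, dim=-1)

def distill_step(teacher, student, images, class_vectors, tau=1.0):
    """Stage 2: prompt-driven imitation on unlabeled domain images.
    Both image encoders score against the same pre-stored class vectors;
    the student (with learnable prompts) is trained to match the teacher's
    soft distribution via KL divergence -- no labels required."""
    with torch.no_grad():
        t_img = F.normalize(teacher.encode_image(images), dim=-1)  # (B, D)
        t_logits = t_img @ class_vectors.t() / tau                 # (B, C)

    s_img = F.normalize(student.encode_image(images), dim=-1)
    s_logits = s_img @ class_vectors.t() / tau

    # Standard soft-label distillation objective: KL(teacher || student)
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean") * tau ** 2
    return loss
```

In CLIP-style models the temperature is usually the learned logit scale rather than a fixed constant; `tau=1.0` here is a placeholder. At inference time only the student image encoder and the cached class vectors are needed, which is what makes the pre-storing mechanism practical.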

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | PromptKD | Harmonic mean | 97.77 | — | Unverified |
| DTD | PromptKD | Harmonic mean | 77.94 | — | Unverified |
| EuroSAT | PromptKD | Harmonic mean | 89.14 | — | Unverified |
| FGVC-Aircraft | PromptKD | Harmonic mean | 45.17 | — | Unverified |
| Food-101 | PromptKD | Harmonic mean | 93.05 | — | Unverified |
| ImageNet | PromptKD | Harmonic mean | 77.62 | — | Unverified |
| Oxford 102 Flower | PromptKD | Harmonic mean | 90.24 | — | Unverified |
| Oxford-IIIT Pet Dataset | PromptKD | Harmonic mean | 97.15 | — | Unverified |
| Stanford Cars | PromptKD | Harmonic mean | 83.13 | — | Unverified |
| SUN397 | PromptKD | Harmonic mean | 82.60 | — | Unverified |
| UCF101 | PromptKD | Harmonic mean | 86.10 | — | Unverified |
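
The "Harmonic mean" metric presumably follows the standard base-to-new generalization protocol used in prompt-learning benchmarks, where it combines accuracy on base (seen) classes and novel (unseen) classes. Assuming that convention, a reproduction would score each dataset as:

```python
def harmonic_mean(base_acc: float, novel_acc: float) -> float:
    """Harmonic mean of base- and novel-class accuracy (in %).
    Assumes the standard base-to-new protocol; the HM penalizes an
    imbalance between the two accuracies more than the arithmetic mean."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# e.g. 80.0% base accuracy and 75.0% novel accuracy -> HM ≈ 77.42
```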
