
MaPLe: Multi-modal Prompt Learning

2022-10-06 · CVPR 2023 · Code Available

Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan


Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
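The central design idea in the abstract — prompts learned in the language branch with the vision-branch prompts *derived* from them through a coupling function, rather than learned independently — can be sketched in a few lines. Everything below is illustrative: the toy dimensions, the initialization, and the use of a plain linear map as the coupling function are assumptions made for clarity, not the paper's exact implementation (which operates inside CLIP's transformer layers at multiple depths).

```python
# Minimal sketch of coupled multi-modal prompts, with toy sizes.
# Assumption: the coupling function is a linear projection from the
# text embedding space into the vision embedding space.

d_text, d_vision, n_ctx = 4, 6, 2   # toy widths; real CLIP uses e.g. 512/768

# Learnable text prompt tokens -- the only free prompt parameters.
text_prompts = [[0.1 * (i + j) for j in range(d_text)] for i in range(n_ctx)]

# Coupling weights: one linear map shared by all prompt tokens.
W = [[0.01 * (i - j) for j in range(d_vision)] for i in range(d_text)]

def couple(prompt):
    """Project one text prompt token into the vision embedding space."""
    return [sum(prompt[i] * W[i][j] for i in range(d_text))
            for j in range(d_vision)]

# Vision prompts are derived from text prompts, never learned separately,
# which is what discourages independent uni-modal solutions.
vision_prompts = [couple(p) for p in text_prompts]
print(len(vision_prompts), len(vision_prompts[0]))  # 2 6
```

Because the vision prompts are a deterministic function of the text prompts, gradient updates to either branch necessarily move both representation spaces together — the "mutual synergy" the abstract describes.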

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | MaPLe | Harmonic mean | 96.02 | | Unverified |
| DTD | MaPLe | Harmonic mean | 68.16 | | Unverified |
| EuroSAT | MaPLe | Harmonic mean | 82.35 | | Unverified |
| FGVC-Aircraft | MaPLe | Harmonic mean | 36.5 | | Unverified |
| Food-101 | MaPLe | Harmonic mean | 91.38 | | Unverified |
| ImageNet | MaPLe | Harmonic mean | 73.47 | | Unverified |
| ImageNet-A | MaPLe | Top-1 accuracy % | 50.9 | | Unverified |
| ImageNet-R | MaPLe | Top-1 accuracy % | 76.98 | | Unverified |
| ImageNet-S | MaPLe | Top-1 accuracy % | 49.15 | | Unverified |
| ImageNet V2 | MaPLe | Top-1 accuracy % | 64.07 | | Unverified |
| Oxford 102 Flower | MaPLe | Harmonic mean | 82.56 | | Unverified |
| Oxford-IIIT Pet Dataset | MaPLe | Harmonic mean | 96.58 | | Unverified |
| Stanford Cars | MaPLe | Harmonic mean | 73.47 | | Unverified |
| SUN397 | MaPLe | Harmonic mean | 79.75 | | Unverified |
| UCF101 | MaPLe | Harmonic mean | 80.82 | | Unverified |
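The "Harmonic mean" metric in the table is the standard base-to-novel generalization score: the harmonic mean of accuracy on base (seen) classes and novel (unseen) classes, which penalizes a model that trades one for the other. A quick check with hypothetical accuracies (the numbers below are not from the paper):

```python
def harmonic_mean(base_acc: float, novel_acc: float) -> float:
    """Harmonic mean of base- and novel-class accuracy (both in %)."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# Hypothetical accuracies, for illustration only.
print(round(harmonic_mean(80.0, 70.0), 2))  # 74.67
```

Note that the harmonic mean is always pulled toward the lower of the two numbers, which is why gains on novel classes translate directly into gains on the overall score reported in the abstract.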

Reproductions