
Generalizable Prompt Learning of CLIP: A Brief Overview

2025-03-03

Fangming Cui, Yonggang Zhang, Xuan Wang, Xule Wang, Liang Xiao


Abstract

Existing vision-language models (VLMs) such as CLIP have demonstrated an impressive ability to generalize across various downstream tasks. These models leverage the synergy between visual and textual information, enabling them to understand and reason about image and text content in a unified manner. This article provides a brief overview of few-shot prompt learning for CLIP, including experimental results and the technical characteristics of representative methods. The review is intended as a reference for researchers beginning work on generalizable prompting of CLIP, evaluated via few-shot classification across 15 datasets, and to help researchers working on other downstream tasks integrate techniques from this field.
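The core idea surveyed here, replacing CLIP's hand-written prompts (e.g. "a photo of a {class}") with a small set of learnable context vectors trained on a few shots per class, can be sketched as follows. This is a minimal CoOp-style illustration in plain PyTorch, not the paper's implementation: the class names, dimensions, and the `PromptLearner` module are hypothetical stand-ins, and a real setup would prepend the context vectors to token embeddings inside CLIP's frozen text encoder.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Hypothetical sketch of CoOp-style prompt learning.

    A shared set of learnable context ("prompt") vectors is prepended
    to each class-name embedding; only these vectors are optimized,
    while the encoders (stubbed out here) stay frozen.
    """

    def __init__(self, n_ctx: int = 4, ctx_dim: int = 512, n_classes: int = 10):
        super().__init__()
        # Learnable context vectors, shared across all classes.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Stand-in for frozen class-name token embeddings
        # (in CLIP these would come from the tokenizer + embedding table).
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, ctx_dim))

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_emb.shape[0]
        # Broadcast the shared context to every class.
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # Resulting prompts: [n_classes, n_ctx + 1, ctx_dim],
        # ready to be fed through a (frozen) text encoder.
        return torch.cat([ctx, self.cls_emb], dim=1)

learner = PromptLearner()
prompts = learner()
print(prompts.shape)  # torch.Size([10, 5, 512])
```

During few-shot training, the text encoder's output for these prompts is compared against image features with a contrastive (cosine-similarity) loss, and gradients flow only into `ctx`; this is what makes the approach parameter-efficient relative to full fine-tuning.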
