SOTAVerified

Prompting Large Vision-Language Models for Compositional Reasoning

2024-01-20 · Code Available

Timothy Ossowski, Ming Jiang, Junjie Hu


Abstract

Vision-language models such as CLIP have shown impressive capabilities in encoding texts and images into aligned embeddings, enabling the retrieval of multimodal data in a shared embedding space. However, these embedding-based models still struggle to match images and texts with similar visio-linguistic compositionality, as evidenced by their performance on the recent Winoground dataset. In this paper, we argue that this limitation stems from two factors: the use of single-vector representations for complex multimodal data, and the absence of step-by-step reasoning in these embedding-based methods. To address these issues, we take an exploratory step with a novel generative method that prompts large vision-language models (e.g., GPT-4) to depict images and perform compositional reasoning. Our method outperforms other embedding-based methods on the Winoground dataset, and obtains a further improvement of up to 10% accuracy when enhanced with the optimal description.
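The generative approach described above replaces a single similarity score with an explicit reasoning step: first obtain a textual depiction of the image, then ask the model to reason over that depiction against the candidate captions. The sketch below illustrates the prompt-construction step only; the function name, prompt wording, and answer format are illustrative assumptions, not the paper's exact prompt.

```python
def build_matching_prompt(description: str, caption_a: str, caption_b: str) -> str:
    """Compose a step-by-step caption-matching prompt for an LLM.

    Hypothetical format: the paper's actual prompt template may differ.
    `description` is assumed to come from a VLM asked to depict the image.
    """
    return (
        "You are given a description of an image.\n"
        f"Description: {description}\n"
        "Which caption matches the image? Reason step by step, "
        "then answer 'A' or 'B'.\n"
        f"A: {caption_a}\n"
        f"B: {caption_b}\n"
    )
```

The returned string would then be sent to a chat model (e.g., GPT-4), and the final 'A'/'B' choice parsed from its reply.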

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Winoground | KeyComp* (GPT-4) | Text Score | 43.5 | — | Unverified |
| Winoground | KeyComp* (GPT-3.5) | Text Score | 42.7 | — | Unverified |
| Winoground | KeyComp (GPT-3.5) | Text Score | 30.3 | — | Unverified |
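For context on the metric: each Winoground example pairs two images with two captions, and the text score counts an example as correct only when the model prefers the right caption for *both* images. A minimal sketch, assuming pairwise matching scores are already available (`s[i][j]` = score of caption `j` for image `i`; the helper name is illustrative):

```python
def text_score(examples):
    """Winoground text score as a percentage.

    `examples` is a list of 2x2 score grids s, where s[i][j] is the
    model's matching score between image i and caption j. An example
    counts only if caption 0 wins for image 0 AND caption 1 wins for
    image 1 (caption i is the ground-truth match for image i).
    """
    correct = sum(
        1 for s in examples
        if s[0][0] > s[0][1] and s[1][1] > s[1][0]
    )
    return 100.0 * correct / len(examples)
```

The image score and group score are defined analogously over the transposed comparisons, which is why chance performance on Winoground is well below 50%.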

Reproductions