Assessing GPT4-V on Structured Reasoning Tasks
Mukul Singh, José Cambronero, Sumit Gulwani, Vu Le, Gust Verbruggen
Abstract
Multi-modality promises to unlock further uses for large language models. Recently, the state-of-the-art language model GPT-4 was enhanced with vision capabilities. We carry out a prompting evaluation of GPT-4V and five other baselines on structured reasoning tasks, such as mathematical reasoning, visual data analysis, and code generation. We show that visual Chain-of-Thought, an extension of Chain-of-Thought to multi-modal LLMs, yields significant improvements over the vanilla model. We also present a categorized analysis of scenarios where these models perform well and where they struggle, highlighting challenges associated with coherent multimodal reasoning.
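The visual Chain-of-Thought prompting evaluated above can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact prompt: it assumes the OpenAI chat-completions message format for image inputs, and the question and image URL are hypothetical placeholders.

```python
# Sketch of a visual Chain-of-Thought (CoT) prompt payload for a multimodal
# chat model. Assumes the OpenAI chat-completions message format; the
# question text and image URL below are illustrative placeholders only.

def build_visual_cot_messages(question: str, image_url: str) -> list[dict]:
    """Pair an image with an instruction to reason step by step before answering."""
    return [
        {
            "role": "system",
            "content": "You are a careful reasoner. Think step by step.",
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        f"{question}\n"
                        "First describe the relevant visual details, "
                        "then reason step by step before giving a final answer."
                    ),
                },
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        },
    ]

messages = build_visual_cot_messages(
    "What is the sum of the two largest bars in this chart?",
    "https://example.com/chart.png",  # placeholder image
)
```

The key difference from vanilla prompting is the explicit instruction to verbalize the visual evidence and intermediate reasoning steps before the answer, mirroring how textual Chain-of-Thought elicits intermediate steps from text-only models.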