VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition in LLMs

2025-03-25

Kelaiti Xiao, Liang Yang, Paerhati Tulajiang, Hongfei Lin


Abstract

This paper introduces VisualQuest, a novel image dataset designed to assess the ability of large language models (LLMs) to interpret non-traditional, stylized imagery. Unlike conventional photographic benchmarks, VisualQuest challenges models with images that incorporate abstract, symbolic, and metaphorical elements, requiring the integration of domain-specific knowledge and advanced reasoning. The dataset was meticulously curated through multiple stages of filtering, annotation, and standardization to ensure high quality and diversity. Our evaluations using several state-of-the-art multimodal LLMs reveal significant performance variations that underscore the importance of both factual background knowledge and inferential capabilities in visual recognition tasks. VisualQuest thus provides a robust and comprehensive benchmark for advancing research in multimodal reasoning and model architecture design.
