
RACQUET: Unveiling the Dangers of Overlooked Referential Ambiguity in Visual LLMs

2024-12-18 · Code Available

Alberto Testoni, Barbara Plank, Raquel Fernández

Abstract

Ambiguity resolution is key to effective communication. While humans effortlessly address ambiguity through conversational grounding strategies, the extent to which current language models can emulate these strategies remains unclear. In this work, we examine referential ambiguity in image-based question answering by introducing RACQUET, a carefully curated dataset targeting distinct aspects of ambiguity. Through a series of evaluations, we reveal significant limitations and problems of overconfidence of state-of-the-art large multimodal language models in addressing ambiguity in their responses. The overconfidence issue becomes particularly relevant for RACQUET-BIAS, a subset designed to analyze a critical yet underexplored problem: failing to address ambiguity leads to stereotypical, socially biased responses. Our results underscore the urgency of equipping models with robust strategies to deal with uncertainty without resorting to undesirable stereotypes.
