Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models

2024-06-12

Chun-Yi Kuan, Wei-Ping Huang, Hung-Yi Lee

Abstract

Large audio-language models (LALMs) enhance traditional large language models by integrating audio perception capabilities, allowing them to tackle audio-related tasks. Previous research has primarily focused on assessing the performance of LALMs across various tasks while overlooking their reliability, particularly concerning issues like object hallucination. In this study, we introduce methods to assess the extent of object hallucination in publicly available LALMs. Our findings reveal that LALMs are comparable to specialized audio captioning models in their understanding of audio content, but struggle to answer discriminative questions, specifically those asking whether a particular object sound is present in an audio clip. This limitation highlights a critical weakness in current LALMs: an inadequate understanding of discriminative queries. Moreover, we explore the potential of prompt engineering to enhance LALMs' performance on discriminative questions.
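The discriminative evaluation the abstract describes can be pictured as a set of yes/no probes paired against ground-truth audio tags. The sketch below is an illustrative assumption, not the authors' exact protocol: the prompt template, the label sets, and the metric name are all hypothetical, and the model call is left as a stub.

```python
# Hedged sketch of a discriminative object-hallucination probe.
# Assumptions (not from the paper): the question template, the use of
# ground-truth present/absent label sets, and the hallucination metric
# defined as wrongly answering "yes" for absent sounds.

def build_probes(present, absent):
    """Pair each sound label with a yes/no question and its gold answer."""
    probes = [(f"Is there a sound of {obj} in the audio?", "yes")
              for obj in present]
    probes += [(f"Is there a sound of {obj} in the audio?", "no")
               for obj in absent]
    return probes

def hallucination_rate(answers, probes):
    """Fraction of absent-object probes the model wrongly answers 'yes'."""
    absent = [(ans, gold) for ans, (_, gold) in zip(answers, probes)
              if gold == "no"]
    wrong = sum(1 for ans, _ in absent
                if ans.strip().lower().startswith("yes"))
    return wrong / len(absent) if absent else 0.0
```

In practice, `answers` would come from querying a LALM with each probe question alongside the audio clip; here the scoring is kept model-agnostic so any LALM's free-text replies can be plugged in.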
