Listening with the Eyes: Benchmarking Egocentric Co-Speech Grounding across Space and Time
Weijie Zhou, Xuantang Xiong, Zhenlin Hu, Xiaomeng Zhu, Chaoyang Zhao, Honghui Dong, Zhengyou Zhang, Ming Tang, Jinqiao Wang
Abstract
In situated collaboration, speakers often issue intentionally underspecified deictic commands (e.g., ``pass me that''), whose referent becomes identifiable only by aligning speech with a brief co-speech pointing stroke. However, many embodied benchmarks admit language-only shortcuts, allowing MLLMs to perform well without learning the audio--visual alignment that deictic interaction requires. To close this gap, we introduce Egocentric Co-Speech Grounding (EcoG), a task in which grounding is executable only if an agent jointly predicts What, Where, and When. To operationalize it, we present EcoG-Bench, an evaluation-only bilingual (EN/ZH) diagnostic benchmark of 811 egocentric clips with dense spatial annotations and millisecond-level stroke supervision, organized under a Progressive Cognitive Evaluation protocol. Benchmarking state-of-the-art MLLMs reveals a severe executability gap: human subjects achieve near-ceiling performance on EcoG-Bench (96.9\% strict Eco-Accuracy), whereas the best native video--audio setting remains low (Gemini-3-Pro: 17.0\%). Moreover, in a diagnostic ablation, replacing the native video--audio interface with timestamped frame samples and externally verified ASR (with word-level timing) substantially improves the same model (17.0\% $\rightarrow$ 42.9\%). Overall, EcoG-Bench provides a strict, executable testbed for event-level speech--gesture binding, and it suggests that multimodal interfaces may bottleneck the observability of temporal alignment cues independently of a model's reasoning ability.
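Because strict Eco-Accuracy credits a clip only when What, Where, and When are all correct, a small scoring sketch may help make the conjunction concrete. The field names, IoU thresholds, and matching criteria below are illustrative assumptions, not EcoG-Bench's published protocol.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and thresholds are assumptions,
# not EcoG-Bench's actual annotation schema or scoring rules.

@dataclass
class Grounding:
    label: str                                # What: identity of the referent
    bbox: tuple[float, float, float, float]   # Where: (x1, y1, x2, y2) in the egocentric frame
    stroke: tuple[float, float]               # When: (start_s, end_s) of the pointing stroke

def iou_2d(a, b):
    """Spatial IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_1d(a, b):
    """Temporal IoU of two (start, end) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def strict_eco_accuracy(preds, golds, iou_thr=0.5, tiou_thr=0.5):
    """A clip scores 1 only if What, Where, AND When all succeed; else 0.

    Assumes preds and golds are aligned lists of Grounding, one per clip.
    """
    hits = sum(
        p.label == g.label                          # What
        and iou_2d(p.bbox, g.bbox) >= iou_thr       # Where
        and iou_1d(p.stroke, g.stroke) >= tiou_thr  # When
        for p, g in zip(preds, golds)
    )
    return hits / len(golds)
```

Under this conjunctive scoring, a model that identifies the referent and localizes it correctly but misses the stroke window receives no credit for the clip, which is what makes the metric executable rather than partially creditable.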