
Causality Matters: How Temporal Information Emerges in Video Language Models

2025-11-15 · Code Available

Yumeng Shi, Quanyu Long, Yin Wu, Wenya Wang


Abstract

Video language models (VideoLMs) have made significant progress in multimodal understanding. However, temporal understanding, which involves identifying event order, duration, and relationships across time, remains a core challenge. Prior works emphasize positional encodings (PEs) as a key mechanism for encoding temporal structure. Surprisingly, we find that removing or modifying PEs in video inputs causes minimal degradation in temporal understanding performance. In contrast, reversing the frame sequence while preserving the original PEs causes a substantial performance drop. To explain this behavior, we conduct extensive analysis experiments to trace how temporal information is integrated within the model. We uncover a causal information pathway: temporal cues are progressively synthesized through inter-frame attention, aggregated in the final frame, and subsequently integrated into the query tokens. This mechanism shows that temporal reasoning emerges from interactions among visual tokens under the constraints of causal attention, which implicitly encodes temporal structure. Based on these insights, we propose two efficiency-oriented strategies: staged cross-modal attention and a temporal exit mechanism for early token truncation. Experiments on two benchmarks validate the effectiveness of both approaches. To the best of our knowledge, this is the first systematic study of video temporal understanding in VideoLMs, offering insights for future model improvement. Our code is available at https://github.com/ANDgate99/Causality-Matters.
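To make the causal-attention argument concrete, below is a minimal PyTorch sketch (our illustration, not the authors' code) of why a causal attention mask alone can encode frame order. With bidirectional attention and no positional encodings, a self-attention layer is permutation-equivariant, so reversing the frame tokens merely reverses the outputs and the model cannot tell the orders apart; with a causal mask, each token only attends to earlier tokens, so reversal changes the outputs themselves. All names and dimensions (`Wq`, `Wk`, `Wv`, `x`, `d`, `n`) are illustrative assumptions.

```python
# Minimal sketch: causal attention without positional encodings is still
# order-sensitive, while bidirectional attention without PEs is
# permutation-equivariant (order-blind).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n = 16, 5                          # embedding dim, number of "frame" tokens
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
x = torch.randn(n, d)                 # frame tokens, no positional encoding added

def attention(x, causal):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / d ** 0.5
    if causal:
        # Mask out attention to future tokens (strict upper triangle).
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

x_rev = x.flip(0)                     # the same frames in reversed order

# Bidirectional: the reversed input yields exactly the reversed outputs,
# so without PEs the layer carries no information about frame order.
bi, bi_rev = attention(x, causal=False), attention(x_rev, causal=False)
print(torch.allclose(bi.flip(0), bi_rev, atol=1e-5))   # True

# Causal: reversal changes which tokens each frame can attend to, so the
# outputs differ beyond a permutation; order is implicitly encoded.
ca, ca_rev = attention(x, causal=True), attention(x_rev, causal=True)
print(torch.allclose(ca.flip(0), ca_rev, atol=1e-5))   # False
```

Under this view, reversing the frames changes each token's visible context, which is consistent with the paper's observation that frame reversal, rather than PE removal, is what degrades temporal understanding.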
