EgoVITA: Learning to Plan and Verify for Egocentric Video Reasoning
Yogesh Kulkarni, Pooyan Fazli
Abstract
Egocentric video understanding requires procedural reasoning under partial observability and continuously shifting viewpoints. Current multimodal large language models (MLLMs) struggle with this setting, often generating plausible but visually inconsistent or weakly grounded responses. We introduce EgoVITA, a framework that decomposes egocentric video reasoning into a structured plan-then-verify process. The model first generates an egocentric plan: a causal sequence of anticipated actions from a first-person perspective. This plan is then evaluated by an exocentric verification stage that validates spatiotemporal and logical consistency from a third-person viewpoint. This decomposition enables cross-perspective feedback without requiring paired ego-exo supervision. To train this reasoning process, we adopt Group Relative Policy Optimization (GRPO) with two dense reward signals: one that aligns intermediate plan steps with future visual states and another that reinforces consistent third-person verification. EgoVITA achieves state-of-the-art performance on egocentric reasoning benchmarks, outperforming Qwen2.5-VL-7B by +7.7 on EgoBlind and +4.4 on EgoOrient, while maintaining strong generalization on exocentric video tasks with only 47k training samples.
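The GRPO objective described above scores a group of sampled rollouts and normalizes each reward against its group's statistics. The sketch below illustrates that group-relative advantage computation; the `combined_reward` weighting of the two dense signals (plan–future-state alignment and verification consistency) is an illustrative assumption, since the abstract does not specify how the signals are combined.

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    # Group-relative advantage, as in GRPO: normalize each rollout's
    # scalar reward by the mean and std of its own sampling group.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

def combined_reward(plan_alignment, verify_consistency,
                    w_plan=0.5, w_verify=0.5):
    # Hypothetical weighted sum of the two dense reward signals;
    # the weights are illustrative, not from the paper.
    return w_plan * plan_alignment + w_verify * verify_consistency

# One group of four sampled rollouts, each with a
# (plan-alignment, verification-consistency) reward pair.
group = [(0.9, 0.8), (0.4, 0.6), (0.7, 0.2), (0.1, 0.3)]
rewards = [combined_reward(p, v) for p, v in group]
advantages = grpo_advantages(rewards)
```

Rollouts scoring above their group mean receive positive advantages and are reinforced; those below are penalized, with no learned value function required.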