OSGNet @ Ego4D Episodic Memory Challenge 2025

2025-06-04

Yisen Feng, Haoyu Zhang, Qiaohui Chu, Meng Liu, Weili Guan, YaoWei Wang, Liqiang Nie

Abstract

In this report, we present our champion solutions for the three egocentric video localization tracks of the Ego4D Episodic Memory Challenge at CVPR 2025. All three tracks require precisely localizing a target time interval within an untrimmed egocentric video. Previous unified video localization approaches often rely on late fusion strategies, which tend to yield suboptimal results. To address this, we adopt an early fusion-based video localization model for all three tasks, aiming to improve localization accuracy. Our method achieved first place in the Natural Language Queries, Goal Step, and Moment Queries tracks, demonstrating its effectiveness. Our code can be found at https://github.com/Yisen-Feng/OSGNet.
