STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos

2023-01-02 · ICCV 2023 · Code Available

Anshul Shah, Benjamin Lundell, Harpreet Sawhney, Rama Chellappa


Abstract

We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We propose a training objective, the Bootstrapped Multi-Cue Contrastive (BMC2) loss, to learn discriminative representations for various steps without any labels. Different from prior works, we develop techniques to train a lightweight temporal module that uses off-the-shelf features for self-supervision. Our approach can seamlessly leverage information from multiple cues such as optical flow, depth, or gaze to learn discriminative features for key steps, making it amenable to AR applications. We then extract key steps via a tunable algorithm that clusters and samples from the learned representations. We show significant improvements over prior works on the tasks of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent the various steps of the procedural tasks.
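The abstract's second stage, extracting key steps by clustering the learned representations and sampling from the clusters, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes per-frame embeddings are available and uses plain k-means, picking the frame nearest each centroid as a candidate key step (the function name `extract_key_steps` and the centroid-nearest sampling rule are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_key_steps(features, n_steps, seed=0):
    """Cluster per-frame embeddings and, for each cluster, pick the frame
    closest to the centroid as a candidate key step.

    features: (T, D) array of frame embeddings for one video.
    Returns the selected frame indices in temporal order.
    """
    km = KMeans(n_clusters=n_steps, n_init=10, random_state=seed).fit(features)
    key_frames = []
    for c in range(n_steps):
        members = np.flatnonzero(km.labels_ == c)          # frames in cluster c
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(int(members[np.argmin(dists)]))  # centroid-nearest frame
    return sorted(key_frames)

# Toy demo: three well-separated 20-frame segments of 8-D features;
# each segment should contribute exactly one key frame.
rng = np.random.default_rng(0)
feats = np.concatenate(
    [rng.normal(loc=m, scale=0.1, size=(20, 8)) for m in (0.0, 5.0, 10.0)]
)
print(extract_key_steps(feats, n_steps=3))
```

The number of clusters `n_steps` plays the role of the "tunable" knob mentioned in the abstract: raising it yields a finer-grained set of candidate key steps.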
