Transformed ROIs for Capturing Visual Transformations in Videos

2021-06-06

Abhinav Rai, Fadime Sener, Angela Yao

Abstract

Modeling the visual changes that an action brings to a scene is critical for video understanding. Currently, CNNs process one local neighbourhood at a time, so contextual relationships over longer ranges, while still learnable, are captured only indirectly. We present TROI, a plug-and-play module for CNNs to reason between mid-level feature representations that are otherwise separated in space and time. The module relates localized visual entities such as hands and interacting objects and transforms their corresponding regions of interest directly in the feature maps of convolutional layers. With TROI, we achieve state-of-the-art action recognition results on the large-scale datasets Something-Something-V2 and EPIC-Kitchens-100.
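To make the described mechanism concrete, the following is a minimal NumPy sketch of the idea, not the paper's implementation: pool each region of interest from a convolutional feature map into a token, relate the tokens with one head of scaled dot-product self-attention, and write the transformed tokens back into the feature map. The pooling (a plain average standing in for RoIAlign), the single attention head, and the function name `troi_sketch` are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def troi_sketch(feat, rois):
    """Transform ROIs of a (C, H, W) feature map by relating them to each other.

    feat: (C, H, W) convolutional feature map.
    rois: list of (y0, y1, x0, x1) boxes, e.g. detected hands and objects.
    Assumption: average pooling stands in for RoIAlign, and a single
    self-attention head stands in for the module's relational transform.
    """
    C = feat.shape[0]
    # 1. Pool each ROI into one C-dim token.
    tokens = np.stack(
        [feat[:, y0:y1, x0:x1].mean(axis=(1, 2)) for (y0, y1, x0, x1) in rois]
    )
    # 2. Relate the tokens: scaled dot-product self-attention across ROIs.
    attn = softmax(tokens @ tokens.T / np.sqrt(C))
    transformed = attn @ tokens
    # 3. Write the transformed tokens back into the feature map in place of
    #    their regions; the rest of the map is left untouched.
    out = feat.copy()
    for t, (y0, y1, x0, x1) in zip(transformed, rois):
        out[:, y0:y1, x0:x1] = t[:, None, None]
    return out
```

In this sketch, features outside the ROIs pass through unchanged, which is what makes the module plug-and-play: it can be dropped between convolutional layers without altering the rest of the network.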
