H2O: Two Hands Manipulating Objects for First Person Interaction Recognition

2021-04-22 · ICCV 2021

Taein Kwon, Bugra Tekin, Jan Stühmer, Federica Bogo, Marc Pollefeys

Abstract

We present a comprehensive framework for egocentric interaction recognition using markerless 3D annotations of two hands manipulating objects. To this end, we propose a method to create a unified dataset for egocentric 3D interaction recognition. Our method produces annotations of the 3D pose of two hands and the 6D pose of the manipulated objects, along with their interaction labels for each frame. Our dataset, called H2O (2 Hands and Objects), provides synchronized multi-view RGB-D images, interaction labels, object classes, ground-truth 3D poses for the left and right hands, 6D object poses, ground-truth camera poses, object meshes, and scene point clouds. To the best of our knowledge, this is the first benchmark that enables the study of first-person actions with the use of the pose of both left and right hands manipulating objects, and it presents an unprecedented level of detail for egocentric 3D interaction recognition. We further propose a method to predict interaction classes by estimating the 3D pose of two hands and the 6D pose of the manipulated objects jointly from RGB images. Our method models both inter- and intra-dependencies between both hands and objects by learning the topology of a graph convolutional network that predicts interactions. We show that our method, facilitated by this dataset, establishes a strong baseline for joint hand-object pose estimation and achieves state-of-the-art accuracy for first-person interaction recognition.
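The abstract's key architectural idea is a graph convolutional network whose topology is learned rather than fixed to the hand skeleton, operating on per-frame hand-joint and object-pose nodes. A minimal sketch of such a layer is shown below; this is not the authors' TA-GCN implementation, and the node layout (21 joints per hand plus 8 object bounding-box corners), feature sizes, and softmax-normalized learnable adjacency are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class LearnedTopologyGCNLayer:
    """One graph-convolution layer with a learnable adjacency (hypothetical sketch)."""

    def __init__(self, num_nodes, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Adjacency logits over all node pairs are free parameters:
        # the graph topology itself is learned, capturing inter- and
        # intra-dependencies between hands and objects.
        self.adj_logits = rng.normal(size=(num_nodes, num_nodes))
        self.weight = rng.normal(size=(in_dim, out_dim)) * 0.1

    def forward(self, x):
        # x: (num_nodes, in_dim) node features, e.g. 3D joint positions
        # and object keypoints for one frame.
        adj = softmax(self.adj_logits, axis=-1)       # row-normalized learned graph
        return np.maximum(adj @ x @ self.weight, 0.0)  # ReLU(A X W)

# Assumed node layout: 2 hands x 21 joints + 8 object box corners = 50 nodes.
num_nodes, in_dim, out_dim = 50, 3, 16
layer = LearnedTopologyGCNLayer(num_nodes, in_dim, out_dim)
features = np.zeros((num_nodes, in_dim))  # placeholder per-frame poses
out = layer.forward(features)
print(out.shape)  # (50, 16)
```

Stacking such layers and pooling the node features over nodes and time would yield a clip-level embedding for interaction classification; the learned adjacency is what lets the model discover hand-object dependencies not present in the skeletal graph.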

Benchmark Results

Dataset                     Model   Metric         Claimed  Verified  Status
H2O (2 Hands and Objects)   TA-GCN  Actions Top-1  79.25    —         Unverified
