
D3D-HOI: Dynamic 3D Human-Object Interactions from Videos

2021-08-19 · Code Available

Xiang Xu, Hanbyul Joo, Greg Mori, Manolis Savva


Abstract

We introduce D3D-HOI: a dataset of monocular videos with ground truth annotations of 3D object pose, shape and part motion during human-object interactions. Our dataset consists of several common articulated objects captured from diverse real-world scenes and camera viewpoints. Each manipulated object (e.g., microwave oven) is represented with a matching 3D parametric model. This data allows us to evaluate the reconstruction quality of articulated objects and establish a benchmark for this challenging task. In particular, we leverage the estimated 3D human pose for more accurate inference of the object spatial layout and dynamics. We evaluate this approach on our dataset, demonstrating that human-object relations can significantly reduce the ambiguity of articulated object reconstructions from challenging real-world videos. Code and dataset are available at https://github.com/facebookresearch/d3d-hoi.
