Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points
Dimitrios Tzionas, Abhilash Srikantha, Pablo Aponte, Juergen Gall
Code
- github.com/cvlabbonn/hands_3d_motion_viewer (official)
- github.com/cvlabbonn/hand_2d_gt_viewer
Abstract
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite the similarities, hand tracking is more challenging: the hand has more degrees of freedom, and fingers suffer from severe occlusions and self-similarity. For this reason, most approaches rely on strong assumptions, such as hands in isolation or expensive multi-camera systems, which limit practical use. In this work, we propose a hand-tracking framework that captures the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.
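To make the combination of terms in the abstract concrete, the following is a minimal sketch (not the authors' code) of a tracking objective of this broad form: a data term aligning hypothesized model points to observed RGB-D points, a collision penalty discouraging interpenetrating finger segments (approximated here by spheres), and a salient-point term pulling model fingertips toward discriminatively detected fingertips. All function names, the sphere approximation, and the weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

def data_term(model_pts, observed_pts):
    """Sum of squared distances between corresponding model and observed 3-D points."""
    return float(np.sum((np.asarray(model_pts) - np.asarray(observed_pts)) ** 2))

def collision_penalty(sphere_centers, radii):
    """Penalize overlap between pairs of collision spheres approximating finger segments.

    Illustrative choice: squared penetration depth per overlapping pair.
    """
    penalty = 0.0
    n = len(sphere_centers)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(sphere_centers[i] - sphere_centers[j])
            overlap = radii[i] + radii[j] - dist
            if overlap > 0:
                penalty += overlap ** 2
    return penalty

def salient_term(model_tips, detected_tips):
    """Squared distance between model fingertips and detected salient points."""
    return float(np.sum((np.asarray(model_tips) - np.asarray(detected_tips)) ** 2))

def energy(model_pts, observed_pts, centers, radii, model_tips, detected_tips,
           w_col=1.0, w_sal=1.0):
    """Composite objective: data term + weighted collision and salient-point terms."""
    return (data_term(model_pts, observed_pts)
            + w_col * collision_penalty(centers, radii)
            + w_sal * salient_term(model_tips, detected_tips))
```

In a generative tracker, an energy of this shape would be minimized over the hand-pose parameters at each frame; here it is written over point sets only, to keep the sketch self-contained.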