SOTAVerified

3D Human Pose Estimation in RGBD Images for Robotic Task Learning

2018-03-07 · Code Available

Christian Zimmermann, Tim Welschehold, Christian Dornhege, Wolfram Burgard, Thomas Brox


Abstract

We propose an approach to estimate 3D human pose in real-world units from a single RGBD image and show that it exceeds the performance of monocular 3D pose estimation approaches from color alone, as well as pose estimation exclusively from depth. Our approach builds on robust human keypoint detectors for color images and incorporates depth for lifting into 3D. We combine the system with our learning-from-demonstration framework to instruct a service robot without the need for markers. Experiments in real-world settings demonstrate that our approach enables a PR2 robot to imitate manipulation actions observed from a human teacher.
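The lifting step described above can be illustrated with a minimal pinhole back-projection sketch: given 2D keypoints detected in the color image and a registered depth map, each keypoint is back-projected into 3D camera coordinates using the camera intrinsics. This is only an illustrative sketch under assumed intrinsics, not the paper's learned lifting network (which can also handle keypoints with missing or noisy depth); the function name and parameter values are hypothetical.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_image, fx, fy, cx, cy):
    """Back-project 2D keypoints into 3D camera coordinates via depth.

    keypoints_2d: (N, 2) array of (u, v) pixel coordinates.
    depth_image:  (H, W) array of depth values in meters, registered
                  to the color image.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed values).
    Returns an (N, 3) array of (x, y, z) points in meters.
    """
    points_3d = []
    for u, v in keypoints_2d:
        # Read depth at the (rounded) keypoint pixel location.
        z = depth_image[int(round(v)), int(round(u))]
        # Standard pinhole inverse projection.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.array(points_3d)

# Example: a keypoint at the principal point lies on the optical axis,
# so its 3D position is (0, 0, depth).
keypoints = np.array([[320.0, 240.0], [330.0, 240.0]])
depth = np.full((480, 640), 2.0)  # flat scene 2 m away (synthetic)
points = lift_keypoints_to_3d(keypoints, depth,
                              fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

In practice, the paper's approach is more robust than this direct lookup, since raw depth at a single pixel can be missing or belong to an occluder; the sketch only conveys the geometric relationship between pixel coordinates, depth, and metric 3D position.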

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Total Capture | ROS node wrapping | Average MPJPE (mm) | 112 | | Unverified |

Reproductions