SOTAVerified

Consensus-based Optimization for 3D Human Pose Estimation in Camera Coordinates

2019-11-21 · Code Available

Diogo C. Luvizon, Hedi Tabia, David Picard

Abstract

3D human pose estimation is frequently framed as estimating 3D poses relative to the root body joint. Alternatively, we propose a 3D human pose estimation method in camera coordinates, which allows an effective combination of 2D annotated data and 3D poses, as well as a straightforward multi-view generalization. To that end, we cast the problem as pose estimation in the view frustum space, where absolute depth prediction and per-joint relative depth estimation are disentangled. Final 3D predictions are obtained in camera coordinates by the inverse camera projection. Building on this, we also present a consensus-based optimization algorithm for multi-view predictions from uncalibrated images, which requires only a single monocular training procedure. Although our method is indirectly tied to the training camera intrinsics, it still converges for cameras with different intrinsic parameters, resulting in coherent estimations up to a scale factor. Our method improves the state of the art on well-known 3D human pose datasets, reducing the prediction error by 32% on the most common benchmark. We also report our results in absolute pose position error, achieving on average 80 mm for monocular estimations and 51 mm for multi-view.
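The inverse camera projection mentioned in the abstract lifts each 2D joint, together with its predicted depth (absolute root depth plus per-joint relative depth), back into 3D camera coordinates via the pinhole model. A minimal sketch of that step, assuming a standard intrinsics matrix `K`; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def backproject_to_camera(uv, z_abs, z_rel, K):
    """Lift 2D joint positions plus predicted depths into camera coordinates.

    uv    : (J, 2) array of joint pixel coordinates
    z_abs : scalar absolute depth of the root joint
    z_rel : (J,) per-joint depth offsets relative to the root
    K     : (3, 3) camera intrinsics matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Disentangled depth: absolute root depth + relative per-joint depth.
    z = z_abs + z_rel
    # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
    x = (uv[:, 0] - cx) * z / fx
    y = (uv[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

A sanity check is to project known 3D points with `K`, then verify that back-projection recovers them exactly; the scale ambiguity the abstract mentions arises when the intrinsics at test time differ from those implicitly learned at training time.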

Benchmark Results

| Dataset      | Model                                    | Metric             | Claimed | Verified | Status     |
|--------------|------------------------------------------|--------------------|---------|----------|------------|
| Human3.6M    | Pose Consensus (multi-view, GT calib.)   | Average MPJPE (mm) | 39      | —        | Unverified |
| Human3.6M    | Pose Consensus (multi-view, est. calib.) | Average MPJPE (mm) | 45      | —        | Unverified |
| Human3.6M    | Pose Consensus (monocular)               | Average MPJPE (mm) | 52      | —        | Unverified |
| MPI-INF-3DHP | Pose Consensus (monocular)               | MPJPE              | 112.1   | —        | Unverified |
