SOTAVerified

Cascaded deep monocular 3D human pose estimation with evolutionary training data

2020-06-14 · CVPR 2020 · Code Available

Shichao Li, Lei Ke, Kevin Pratama, Yu-Wing Tai, Chi-Keung Tang, Kwang-Ting Cheng


Abstract

End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail on unseen poses when trained with limited, fixed data. This paper proposes a novel data augmentation method that: (1) scales to synthesizing massive amounts of training data (over 8 million valid 3D human poses with corresponding 2D projections) for training 2D-to-3D networks, and (2) effectively reduces dataset bias. Our method evolves a limited dataset to synthesize unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge. Extensive experiments show that our approach not only achieves state-of-the-art accuracy on the largest public benchmark, but also generalizes significantly better to unseen and rare poses. Code, pre-trained models and tools are available at this HTTPS URL.
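The abstract describes evolving an initial pose set with genetic-style operators over a hierarchical (part-based) skeleton. The following is a minimal sketch of that idea, not the paper's actual algorithm: the part grouping, joint indices, operator probabilities, and noise scale are all illustrative assumptions, and no fitness or validity heuristic is modeled.

```python
import random

# Toy hierarchical skeleton: each body part groups joint indices.
# The part names and indices here are hypothetical, not the paper's.
PARTS = {
    "torso": [0, 1, 2],
    "left_arm": [3, 4],
    "right_arm": [5, 6],
    "left_leg": [7, 8],
    "right_leg": [9, 10],
}

def crossover(pose_a, pose_b):
    """Swap whole body parts between two parent poses (illustrative operator)."""
    child = list(pose_a)
    for part, joints in PARTS.items():
        if random.random() < 0.5:  # coin flip per part
            for j in joints:
                child[j] = pose_b[j]
    return child

def mutate(pose, sigma=0.02):
    """Perturb each 3D joint coordinate with small Gaussian noise."""
    return [tuple(c + random.gauss(0.0, sigma) for c in joint) for joint in pose]

def evolve(population, generations=3):
    """Grow the pose set by repeated crossover + mutation.

    Each generation doubles the population; a real pipeline would also
    filter offspring with anthropometric validity heuristics.
    """
    for _ in range(generations):
        offspring = []
        for _ in range(len(population)):
            a, b = random.sample(population, 2)
            offspring.append(mutate(crossover(a, b)))
        population = population + offspring
    return population
```

Starting from a handful of seed poses, a few generations already multiply the dataset size exponentially, which is how a limited dataset can be expanded toward millions of synthetic poses.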

Tasks

Benchmark Results

Dataset        Model        Metric               Claimed   Verified   Status
Human3.6M      TAG-Net      Average MPJPE (mm)   50.9      -          Unverified
MPI-INF-3DHP   EvoSkeleton  MPJPE (mm)           99.7      -          Unverified
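The MPJPE numbers above are Mean Per-Joint Position Errors: the average Euclidean distance, in millimeters, between predicted and ground-truth 3D joints. A minimal sketch of the metric (plain per-joint error, without the Procrustes alignment some protocols apply):

```python
import math

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance (in the
    same units as the inputs, typically mm) over corresponding joints."""
    assert len(pred) == len(gt), "pose skeletons must have equal joint counts"
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(pred)
```

For example, a prediction whose every joint is exactly 3 mm from the ground truth yields `mpjpe(...) == 3.0`.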

Reproductions