
PanopTOP: a framework for generating viewpoint-invariant human pose estimation datasets

2021-10-11 · ICCV 2021 · Code Available

Nicola Garau, Giulia Martinelli, Piotr Bródka, Niccolò Bisagno, Nicola Conci

Abstract

Human pose estimation (HPE) from RGB and depth images has recently experienced a push for viewpoint-invariant and scale-invariant pose retrieval methods. Current methods fail to generalize to unconventional viewpoints due to the lack of viewpoint-invariant data at training time. Existing datasets do not provide multiple-viewpoint observations and mostly focus on frontal views. In this work, we introduce PanopTOP, a fully automatic framework for the generation of semi-synthetic RGB and depth samples with 2D and 3D ground truth of pedestrian poses from multiple arbitrary viewpoints. Starting from the Panoptic Dataset [15], we use the PanopTOP framework to generate the PanopTOP31K dataset, consisting of 31K images from 23 different subjects recorded from diverse and challenging viewpoints, also including the top-view. Finally, we provide baseline results and cross-validation tests for our dataset, demonstrating how it is possible to generalize from the semi-synthetic to the real-world domain. The dataset and the code will be made publicly available upon acceptance.
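The abstract does not spell out how 2D ground truth is obtained for arbitrary viewpoints, but generating 2D pose annotations from 3D joint positions and a chosen camera typically reduces to a standard pinhole projection of the joints through that camera's extrinsics and intrinsics. A minimal sketch of that step (the function name and all parameter values here are illustrative, not taken from the PanopTOP code):

```python
import numpy as np

def project_joints(joints_3d, R, t, K):
    """Project N world-space 3D joints into one camera view.

    joints_3d: (N, 3) joint positions in world coordinates
    R: (3, 3) rotation, t: (3,) translation (world -> camera frame)
    K: (3, 3) pinhole intrinsics
    Returns (N, 2) pixel coordinates.
    """
    cam = joints_3d @ R.T + t      # transform into the camera frame
    uv = cam @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide by depth

# Illustrative camera: looking down +z, focal length 1000 px,
# principal point at the center of a 1920x1080 image.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
joints = np.array([[0.0, 0.0, 2.0]])    # one joint 2 m in front of the camera
print(project_joints(joints, R, t, K))  # -> [[960. 540.]]
```

Repeating this projection for many camera placements (including overhead ones, as for the top-view samples described above) yields per-viewpoint 2D annotations that stay consistent with the shared 3D ground truth.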
