
OmniPose: A Multi-Scale Framework for Multi-Person Pose Estimation

2021-03-18 · Code Available

Bruno Artacho, Andreas Savakis


Abstract

We propose OmniPose, a single-pass, end-to-end trainable framework that achieves state-of-the-art results for multi-person pose estimation. Using a novel waterfall module, the OmniPose architecture leverages multi-scale feature representations that increase the effectiveness of backbone feature extractors without the need for post-processing. OmniPose incorporates contextual information across scales and joint localization with Gaussian heatmap modulation in the multi-scale feature extractor to estimate human pose with state-of-the-art accuracy. The multi-scale representations obtained by the improved waterfall module leverage the efficiency of progressive filtering in the cascade architecture while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Results on multiple datasets demonstrate that OmniPose, with an improved HRNet backbone and waterfall module, is a robust and efficient architecture for multi-person pose estimation.
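The abstract's contrast between a cascaded "waterfall" of filters and a parallel spatial pyramid can be made concrete with a receptive-field calculation. The sketch below is illustrative only: the dilation rates [1, 6, 12, 18] and the 3x3 kernel are assumptions for the example, not values taken from the paper. It shows why chaining dilated convolutions (and tapping each intermediate output, as a waterfall module does) yields fields-of-view at least as large as running the same dilations as independent pyramid branches.

```python
# Receptive-field sketch contrasting a waterfall cascade of dilated 3x3
# convolutions with a parallel spatial-pyramid (ASPP-style) arrangement.
# Dilation rates and kernel size are illustrative assumptions.

def rf_after(dilations, kernel=3):
    """Receptive field after each stage of a stride-1 conv cascade."""
    rf, fields = 1, []
    for d in dilations:
        rf += (kernel - 1) * d   # each dilated conv widens the field
        fields.append(rf)
    return fields

rates = [1, 6, 12, 18]

# Waterfall: stages are chained, so fields-of-view accumulate stage by stage.
cascade_fov = rf_after(rates)                      # [3, 15, 39, 75]

# Spatial pyramid: each branch sees only its own dilated conv on the input.
pyramid_fov = [rf_after([d])[0] for d in rates]    # [3, 13, 25, 37]

print(cascade_fov)  # → [3, 15, 39, 75]
print(pyramid_fov)  # → [3, 13, 25, 37]
```

Tapping the cascade after each stage gives a set of progressively larger fields-of-view while reusing every earlier stage's filtering, which is the efficiency argument the abstract makes for the waterfall design.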

Tasks

Benchmark Results

Dataset                           Model               Metric         Claimed  Verified  Status
COCO (Common Objects in Context)  OmniPose (WASPv2)   AP             79.5     —         Unverified
COCO test-dev                     OmniPose (WASPv2)   AP             76.4     —         Unverified
Leeds Sports Pose (LSP)           OmniPose            PCK            99.5     —         Unverified
MPII                              OmniPose (WASPv2)   PCKh@0.5       92.3     —         Unverified
UPenn Action                      OmniPose            Mean PCK@0.2   99.4     —         Unverified
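Several of the metrics above are PCK variants: a predicted joint counts as correct when it lies within a fraction alpha of a per-person reference length (e.g. torso size for LSP's PCK, head-segment size for MPII's PCKh) from the ground truth. The following is a minimal sketch of that definition; the function name, the toy keypoints, and the reference scale are invented for illustration and are not the official evaluation code for any of these benchmarks.

```python
import numpy as np

def pck(pred, gt, ref_scale, alpha=0.2):
    """Fraction of joints predicted within alpha * ref_scale of ground truth.

    pred, gt: (num_joints, 2) arrays of (x, y) pixel coordinates.
    ref_scale: per-person normalization length (torso or head size).
    """
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= alpha * ref_scale))

# Toy example: 3 joints, reference scale 30 px, so the threshold is 6 px.
gt   = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 90.0]])
pred = np.array([[11.0, 10.0], [58.0, 40.0], [80.0, 95.0]])

score = pck(pred, gt, ref_scale=30.0, alpha=0.2)
print(score)  # 2 of 3 joints fall within 6 px → 0.666...
```

The benchmark scores in the table aggregate this per-joint test over all joints and all people in the evaluation set.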

Reproductions