SOTAVerified

TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation

2022-06-14 · Code Available

Mohammad Rezaei, Razieh Rastgoo, Vassilis Athitsos

Abstract

3D hand pose estimation methods have made significant progress recently. However, the estimation accuracy is often far from sufficient for specific real-world applications, and thus there is significant room for improvement. This paper proposes TriHorn-Net, a novel model that introduces two innovations to improve hand pose estimation accuracy on depth images. The first innovation is the decomposition of 3D hand pose estimation into the estimation of 2D joint locations in the depth image space (UV) and the estimation of their corresponding depths, aided by two complementary attention maps. This decomposition prevents depth estimation, which is a more difficult task, from interfering with the UV estimation at both the prediction and feature levels. The second innovation is PixDropout, which is, to the best of our knowledge, the first appearance-based data augmentation method for hand depth images. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods on three public benchmark datasets. Our implementation is available at https://github.com/mrezaei92/TriHorn-Net.
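The exact formulation of PixDropout is given in the paper and the official repository; as a rough illustration of what an appearance-based augmentation for depth images looks like, the sketch below randomly replaces a fraction of pixels with a fill value. The probability `p` and the zero fill value are assumptions for this sketch, not the paper's settings.

```python
import numpy as np

def pix_dropout(depth, p=0.1, fill=0.0, rng=None):
    """Sketch of an appearance-based augmentation for depth images.

    Each pixel is independently replaced with `fill` with probability `p`.
    This is an illustrative approximation, not the paper's exact PixDropout.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(depth.shape) < p  # True where a pixel is dropped
    out = depth.copy()
    out[mask] = fill
    return out

# Example: augment a synthetic 64x64 depth map
rng = np.random.default_rng(0)
img = np.ones((64, 64), dtype=np.float32)
aug = pix_dropout(img, p=0.25, fill=0.0, rng=rng)
```

Because the augmentation only perturbs pixel values (appearance) and not joint coordinates, the ground-truth labels need no adjustment, which is what makes this family of augmentations convenient for pose estimation.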

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ICVL Hands | TriHorn-Net | Average 3D Error (mm) | 5.73 | — | Unverified |
| MSRA Hands | TriHorn-Net | Average 3D Error (mm) | 7.13 | — | Unverified |
| NYU Hands | TriHorn-Net | Average 3D Error (mm) | 7.68 | — | Unverified |