SOTAVerified

Efficient Virtual View Selection for 3D Hand Pose Estimation

2022-03-29

Jian Cheng, Yanguang Wan, Dexin Zuo, Cuixia Ma, Jian Gu, Ping Tan, Hongan Wang, Xiaoming Deng, Yinda Zhang


Abstract

3D hand pose estimation from a single depth image is a fundamental problem in computer vision with wide applications. However, existing methods still cannot achieve satisfactory hand pose estimation results due to view variation and occlusion of the human hand. In this paper, we propose a new virtual view selection and fusion module for 3D hand pose estimation from a single depth image. We automatically select multiple virtual viewpoints for pose estimation and fuse the results from all of them, which we find empirically delivers accurate and robust pose estimation. To select the most effective virtual views for pose fusion, we evaluate each virtual view's confidence with a lightweight network trained via network distillation. Experiments on three main benchmark datasets, NYU, ICVL and Hands2019, demonstrate that our method outperforms the state of the art on NYU and ICVL and achieves very competitive performance on Hands2019-Task1, and that our proposed virtual view selection and fusion modules are both effective for 3D hand pose estimation.
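The fusion step described above — combining per-view pose predictions weighted by a learned confidence — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the softmax weighting, and the toy inputs are all assumptions; the paper only states that view results are fused based on confidences from a lightweight network.

```python
import numpy as np

def fuse_virtual_views(poses, confidences):
    """Confidence-weighted fusion of per-view 3D hand pose predictions.

    poses: (V, J, 3) array of J joint positions predicted from V virtual
           views, already transformed back into a common camera frame.
    confidences: (V,) array of per-view confidence scores (assumed to come
           from the lightweight selection network).
    """
    w = np.asarray(confidences, dtype=np.float64)
    w = np.exp(w - w.max())   # softmax over views, shifted for stability
    w /= w.sum()
    # Weighted average over the view axis: (V,) x (V, J, 3) -> (J, 3)
    return np.tensordot(w, np.asarray(poses, dtype=np.float64), axes=1)

# Toy example: 3 views, 2 joints; the third view is an outlier that the
# confidence scores (hypothetical values) strongly down-weight.
poses = np.array([
    [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]],
    [[0.2, 0.0, 0.0], [1.2, 1.0, 1.0]],
    [[4.0, 4.0, 4.0], [5.0, 5.0, 5.0]],
])
conf = np.array([3.0, 3.0, -5.0])
fused = fuse_virtual_views(poses, conf)   # close to the mean of views 1 and 2
```

With near-equal confidences the fusion reduces to simple averaging; the gain comes from the selection network suppressing views degraded by occlusion.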


Benchmark Results

Dataset    | Model                  | Metric           | Claimed | Verified | Status
HANDS 2019 | Ours-15views           | Average 3D Error | 12.51   |          | Unverified
ICVL       | Ours-15views           | Error (mm)       | 4.76    |          | Unverified
ICVL Hands | Virtual View Selection | Average 3D Error | 4.79    |          | Unverified
NYU Hands  | Virtual View Selection | Average 3D Error | 6.4     |          | Unverified

Reproductions