SOTAVerified

MobRecon: Mobile-Friendly Hand Mesh Reconstruction from Monocular Image

2021-12-06 · CVPR 2022 · Code Available

Xingyu Chen, Yufeng Liu, Yajiao Dong, Xiong Zhang, Chongyang Ma, Yanmin Xiong, Yuan Zhang, Xiaoyan Guo


Abstract

In this work, we propose a framework for single-view hand mesh reconstruction, which can simultaneously achieve high reconstruction accuracy, fast inference speed, and temporal coherence. Specifically, for 2D encoding, we propose lightweight yet effective stacked structures. Regarding 3D decoding, we provide an efficient graph operator, namely depth-separable spiral convolution. Moreover, we present a novel feature lifting module for bridging the gap between 2D and 3D representations. This module begins with a map-based position regression (MapReg) block that integrates the merits of both heatmap encoding and position regression paradigms for improved 2D accuracy and temporal coherence. MapReg is followed by pose pooling and pose-to-vertex lifting, which transform 2D pose encodings into semantic features of 3D vertices. Overall, our hand reconstruction framework, called MobRecon, achieves affordable computational cost and a compact model size, reaching a high inference speed of 83 FPS on an Apple A14 CPU. Extensive experiments on popular datasets such as FreiHAND, RHD, and HO3Dv2 demonstrate that MobRecon achieves superior reconstruction accuracy and temporal coherence. Our code is publicly available at https://github.com/SeanChenxy/HandMesh.
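The depth-separable spiral convolution mentioned in the abstract can be pictured as a two-step operator on mesh vertex features: gather each vertex's features along a precomputed spiral of neighbor indices, aggregate per channel with depthwise weights, then mix channels with a pointwise linear layer. A minimal NumPy sketch (all function and variable names here are hypothetical, not taken from the MobRecon codebase):

```python
import numpy as np

def depthwise_spiral_conv(x, spirals, w_depth, w_point):
    """Sketch of a depth-separable spiral convolution (assumed structure).

    x:       (V, C) vertex features
    spirals: (V, K) int array of precomputed spiral neighbor indices
    w_depth: (K, C) depthwise weights shared across vertices
    w_point: (C, C_out) pointwise (1x1) channel-mixing weights
    """
    gathered = x[spirals]                # (V, K, C): features along each spiral
    depth = (gathered * w_depth).sum(1)  # (V, C): per-channel spiral aggregation
    return depth @ w_point               # (V, C_out): pointwise channel mixing
```

Compared with a plain spiral convolution, which applies one dense weight matrix over the flattened (K · C) spiral window, this separable form reduces parameters and multiply-adds, consistent with the paper's emphasis on mobile-friendly efficiency.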

Tasks

Benchmark Results

| Dataset  | Model    | Metric             | Claimed | Verified | Status     |
|----------|----------|--------------------|---------|----------|------------|
| DexYCB   | MobRecon | Average MPJPE (mm) | 14.2    | —        | Unverified |
| FreiHAND | MobRecon | PA-MPJPE (mm)      | 5.7     | —        | Unverified |
| HO-3D v2 | MobRecon | PA-MPJPE (mm)      | 9.2     | —        | Unverified |

Reproductions