Self-Supervised Transfer Learning for Hand Mesh Recovery From Binocular Images

ICCV 2021 · 2021-01-01

Zheng Chen, Sihan Wang, Yi Sun, Xiaohong Ma

Abstract

Traditional methods for RGB hand mesh recovery usually train a separate model for each dataset with the corresponding ground truth and are hard to adapt to new scenarios where no ground truth is available for supervision. To address this problem, we propose a self-supervised framework for hand mesh estimation in which we pre-learn hand priors from existing hand datasets and transfer them to new scenarios without any landmark annotations. The proposed approach takes binocular images as input and relies mainly on left-right consistency constraints, including appearance consensus and shape consistency, to train the model to estimate hand meshes in new scenarios. Experiments on the widely used stereo hand dataset show that our model achieves performance comparable to state-of-the-art methods even without the corresponding landmark annotations. To further evaluate our model, we collect a large real binocular dataset; qualitative results on this dataset also verify the effectiveness of our approach.
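The left-right consistency constraints described above can be sketched as two losses: an appearance consensus term comparing the left image with the right image warped into the left view, and a shape consistency term comparing the meshes estimated from the two views after mapping one into the other's camera frame via the known stereo extrinsics. The function names, the L1/L2 loss forms, and the NumPy setting below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def appearance_consensus_loss(left_img, right_img_warped, mask):
    """Hypothetical appearance-consensus term: mean L1 photometric
    difference between the left image and the right image warped into
    the left view, averaged over valid (visible) pixels given by mask."""
    # mask: (H, W) of 0/1 validity; images: (H, W, 3) floats
    diff = np.abs(left_img - right_img_warped) * mask[..., None]
    return diff.sum() / max(mask.sum() * left_img.shape[-1], 1)

def shape_consistency_loss(verts_left, verts_right, R, t):
    """Hypothetical shape-consistency term: mean squared distance
    between the mesh estimated from the left view and the mesh
    estimated from the right view, after transforming the right-view
    vertices into the left camera frame with extrinsics (R, t)."""
    # verts_*: (N, 3) vertex arrays; R: (3, 3) rotation; t: (3,) translation
    verts_right_in_left = verts_right @ R.T + t
    return float(np.mean(np.sum((verts_left - verts_right_in_left) ** 2, axis=-1)))
```

Both terms vanish when the two views agree: identical warped appearance gives zero photometric loss, and meshes that coincide after the rigid stereo transform give zero shape loss, which is the self-supervisory signal the framework exploits.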