Mesh Graphormer
Kevin Lin, Lijuan Wang, Zicheng Liu
Code
- github.com/microsoft/meshgraphormer (official, in paper) ★ 425
- github.com/MS-P3/code5/tree/main/graphormer (MindSpore) ★ 0
- github.com/2024-MindSpore-1/Code2/tree/main/model-1/graphormer (MindSpore) ★ 0
Abstract
We present a graph-convolution-reinforced transformer, named Mesh Graphormer, for 3D human pose and mesh reconstruction from a single image. Recently, both transformers and graph convolutional neural networks (GCNNs) have shown promising progress in human mesh reconstruction. Transformer-based approaches are effective in modeling non-local interactions among 3D mesh vertices and body joints, whereas GCNNs are good at exploiting neighborhood vertex interactions based on a pre-specified mesh topology. In this paper, we study how to combine graph convolutions and self-attention in a transformer to model both local and global interactions. Experimental results show that our proposed method, Mesh Graphormer, significantly outperforms the previous state-of-the-art methods on multiple benchmarks, including the Human3.6M, 3DPW, and FreiHAND datasets. Code and pre-trained models are available at https://github.com/microsoft/MeshGraphormer
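To make the core idea concrete, here is a minimal NumPy sketch of a block that combines global self-attention with a graph convolution over the mesh adjacency, as the abstract describes. This is an illustrative toy, not the authors' implementation: the layer ordering, residual connections, normalization choices, and all parameter names (`Wq`, `Wk`, `Wv`, `Wg`) are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Global interactions: every vertex/joint token attends to every other.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def graph_conv(X, A, W):
    # Local interactions: aggregate features from mesh-adjacent vertices.
    # A is the adjacency matrix of the mesh topology, with self-loops added.
    D_inv = np.diag(1.0 / A.sum(axis=1))       # degree normalization
    return np.maximum(D_inv @ A @ X @ W, 0.0)  # ReLU

def graphormer_block(X, A, params):
    # Hypothetical ordering: attention first, then graph conv, with residuals.
    H = X + self_attention(X, params["Wq"], params["Wk"], params["Wv"])
    return H + graph_conv(H, A, params["Wg"])

# Toy example: 4 vertices on a path graph, 8-dim features.
rng = np.random.default_rng(0)
d = 8
A = np.eye(4)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
params = {k: rng.standard_normal((d, d)) * 0.1
          for k in ("Wq", "Wk", "Wv", "Wg")}
X = rng.standard_normal((4, d))
Y = graphormer_block(X, A, params)
print(Y.shape)  # (4, 8)
```

The point of the combination is that the attention term lets any two tokens interact regardless of mesh distance, while the graph-convolution term injects the fixed mesh topology as an inductive bias for local structure.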
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| FreiHAND | MeshGraphormer | PA-MPJPE | 5.9 | — | Unverified |
| HInt: Hand Interactions in the wild | MeshGraphormer | PCK@0.05 (New Days) All | 16.8 | — | Unverified |