
TokenHPE: Learning Orientation Tokens for Efficient Head Pose Estimation via Transformers

CVPR 2023 · 2023-01-01 · Code Available

Cheng Zhang, Hai Liu, Yongjian Deng, Bochen Xie, Youfu Li


Abstract

Head pose estimation (HPE) has been widely used in the fields of human-machine interaction, self-driving, and attention estimation. However, existing methods cannot deal with extreme head pose randomness and serious occlusions. To address these challenges, we identify three cues from head images, namely, neighborhood similarities, significant facial changes, and critical minority relationships. To leverage the observed findings, we propose a novel critical minority relationship-aware method based on the Transformer architecture in which the facial part relationships can be learned. Specifically, we design several orientation tokens to explicitly encode the basic orientation regions. Meanwhile, a novel token-guided multi-loss function is designed to guide the orientation tokens as they learn the desired regional similarities and relationships. We evaluate the proposed method on three challenging benchmark HPE datasets. Experiments show that our method achieves better performance compared with state-of-the-art methods. Our code is publicly available at https://github.com/zc2023/TokenHPE.
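The abstract's "orientation tokens" can be read as learnable embeddings prepended to the patch-token sequence of a ViT-style encoder, analogous to the familiar [CLS] token. The sketch below illustrates only that token-concatenation idea; it is not the authors' implementation, and the shapes and the choice of eight tokens are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch (assumption, not the paper's code): prepend learnable
# "orientation tokens" to image patch embeddings before a Transformer
# encoder, so self-attention can relate facial parts to orientation regions.

num_patches, dim = 196, 64        # e.g. 14x14 patches from a 224x224 image
num_orient_tokens = 8             # hypothetical: one token per orientation region

rng = np.random.default_rng(0)
patch_tokens = rng.normal(size=(num_patches, dim))               # from patch embedding
orientation_tokens = rng.normal(size=(num_orient_tokens, dim))   # learnable parameters

# Concatenate so the encoder attends jointly over both token sets;
# after encoding, the orientation-token outputs would feed the pose head.
sequence = np.concatenate([orientation_tokens, patch_tokens], axis=0)
print(sequence.shape)  # (204, 64)
```

In a real model the orientation tokens would be trainable parameters updated by the token-guided multi-loss, while the patch tokens come from a convolutional or linear patch-embedding layer.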
