SOTAVerified

Cascaded Dual Vision Transformer for Accurate Facial Landmark Detection

2024-11-08 · Code Available

Ziqiang Dang, Jianfang Li, Lin Liu


Abstract

Facial landmark detection is a fundamental problem in computer vision with many downstream applications. This paper introduces a new facial landmark detector based on vision transformers, built on two key designs: a Dual Vision Transformer (D-ViT) and Long Skip Connections (LSC). Based on the observation that the channel dimension of the feature maps essentially represents the linear bases of the heatmap space, we propose learning the interconnections between these linear bases to model the inherent geometric relations among landmarks via a Channel-split ViT. We integrate this channel-split ViT with the standard vision transformer (i.e., a spatial-split ViT), forming the Dual Vision Transformer that constitutes our prediction blocks. We also use long skip connections to deliver low-level image features to all prediction blocks, preventing useful information from being discarded by intermediate supervision. Extensive experiments on the widely used WFLW, COFW, and 300W benchmarks demonstrate that our model outperforms previous state-of-the-art methods on all three.
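The core idea of the dual branch can be sketched in a few lines: a spatial-split branch attends over the H×W positions (tokens of dimension C), while a channel-split branch attends over the C channels (tokens of dimension H×W), treating each channel as a linear basis of the heatmap space. The sketch below is a minimal, weight-free NumPy illustration under assumed simplifications (single-head attention without learned projections, branch fusion by addition); the paper's actual blocks, fusion scheme, and dimensions may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # Minimal single-head attention sketch without learned Q/K/V
    # projections; tokens has shape (num_tokens, token_dim).
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    return softmax(scores) @ tokens

def dual_vit_block(feat):
    # feat: (C, H, W) feature map for a single image.
    C, H, W = feat.shape
    # Spatial-split branch: each of the H*W positions is a token of dim C.
    spatial_tokens = feat.reshape(C, H * W).T              # (H*W, C)
    spatial_out = self_attention(spatial_tokens).T.reshape(C, H, W)
    # Channel-split branch: each of the C channels is a token of dim H*W,
    # modeling interconnections among the heatmap-space bases.
    channel_tokens = feat.reshape(C, H * W)                # (C, H*W)
    channel_out = self_attention(channel_tokens).reshape(C, H, W)
    # Assumed fusion by addition (the paper's fusion may differ).
    return spatial_out + channel_out

feat = np.random.randn(8, 4, 4)
out = dual_vit_block(feat)
print(out.shape)  # (8, 4, 4)
```

Note that both branches share the same feature map and only differ in which axis is split into tokens, so the block adds geometric (channel) reasoning without replacing the standard spatial attention.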

Tasks

Facial Landmark Detection

Benchmark Results

Dataset | Model | Metric            | Claimed | Verified | Status
300W    | D-ViT | NME               | 2.85    | —        | Unverified
COFW    | D-ViT | NME (inter-pupil) | 4.13    | —        | Unverified
WFLW    | D-ViT | NME               | 3.75    | —        | Unverified

Reproductions