QCS: Feature Refining from Quadruplet Cross Similarity for Facial Expression Recognition

2024-11-04 · Code Available

Chengpeng Wang, Li Chen, Lili Wang, Zhaofan Li, Xuebin Lv


Abstract

Facial expression recognition faces challenges where labeled significant features in datasets are mixed with unlabeled redundant ones. In this paper, we introduce Cross Similarity Attention (CSA) to mine richer intrinsic information from image pairs, overcoming a limitation that arises when the Scaled Dot-Product Attention of ViT is applied directly to compute the similarity between two different images. Based on CSA, we simultaneously minimize intra-class differences and maximize inter-class differences at the fine-grained feature level through interactions among multiple branches. Contrastive residual distillation is used to transfer the information learned in the cross module back to the base network. We design a four-branch, centrally symmetric network, named Quadruplet Cross Similarity (QCS), which alleviates gradient conflicts arising from the cross module and achieves balanced, stable training. It adaptively extracts discriminative features while isolating redundant ones. The cross-attention modules exist only during training; a single base branch is retained during inference, so inference time does not increase. Extensive experiments show that our proposed method achieves state-of-the-art performance on several FER datasets.
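For intuition, the abstract contrasts CSA with the standard Scaled Dot-Product Attention of ViT applied across two images. The following is a minimal NumPy sketch of that cross-image baseline only, not of the paper's CSA formulation (all names and shapes here are illustrative assumptions; see the paper and code for the actual method):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_image_attention(feat_a, feat_b):
    """Scaled dot-product attention between tokens of two different images.

    feat_a, feat_b: (N, D) arrays of N spatial tokens with D channels
    (e.g. flattened feature maps of two images in a pair). Queries come
    from image A, keys/values from image B, so A's features are refined
    by their similarity to B's. This is the baseline the paper improves on.
    """
    scale = feat_a.shape[-1] ** -0.5
    sim = (feat_a @ feat_b.T) * scale     # (N_a, N_b) cross-image similarity
    attn = softmax(sim, axis=-1)          # each row of weights sums to 1
    return attn @ feat_b                  # (N_a, D) refined features

# Toy example: 7x7 = 49 tokens with 64 channels per image.
a = np.random.rand(49, 64)
b = np.random.rand(49, 64)
refined = cross_image_attention(a, b)
print(refined.shape)  # (49, 64)
```

During QCS training such cross modules operate on image pairs across branches; at inference only one base branch is kept, which is why the method adds no inference cost.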

Benchmark Results

Dataset     Model   Metric                 Claimed   Verified   Status
AffectNet   QCS     Accuracy (8 emotion)   64.4      —          Unverified
FER+        QCS     Accuracy               91.85     —          Unverified
RAF-DB      QCS     Overall Accuracy       93.02     —          Unverified

Reproductions