CANAMRF: An Attention-Based Model for Multimodal Depression Detection

2024-01-04

Yuntao Wei, Yuzhe Zhang, Shuyang Zhang, Hong Zhang


Abstract

Multimodal depression detection is an important research topic that aims to predict human mental states from multimodal data. Previous methods treat all modalities equally and fuse them with naïve mathematical operations, without measuring their relative importance, and therefore fail to produce strong multimodal representations for downstream depression tasks. To address this concern, we present a Cross-modal Attention Network with Adaptive Multi-modal Recurrent Fusion (CANAMRF) for multimodal depression detection. CANAMRF consists of a multimodal feature extractor, an Adaptive Multimodal Recurrent Fusion module, and a Hybrid Attention Module. In experiments on two benchmark datasets, CANAMRF achieves state-of-the-art performance, underscoring the effectiveness of the proposed approach.
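The abstract contrasts naïve fusion (e.g., averaging modality features) with attention-weighted fusion. The paper's actual architecture is not detailed here, so the following is only an illustrative NumPy sketch of the general idea: cross-modal attention lets one modality attend to another, and an adaptive gate weights modalities by importance instead of treating them equally. All function names and the gating heuristic are hypothetical, not taken from CANAMRF.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, key_feats):
    """Attend from one modality (queries) to another (keys/values).

    query_feats: (T_q, d) features of modality A
    key_feats:   (T_k, d) features of modality B
    Returns (T_q, d) features of B aligned to A.
    """
    d = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d)   # (T_q, T_k)
    weights = softmax(scores, axis=-1)                # attention over B's steps
    return weights @ key_feats

def adaptive_fusion(modalities):
    """Fuse same-shaped modality features with scalar gates.

    Unlike naive averaging, each modality gets a data-dependent weight.
    The mean-activation gate below is a hypothetical stand-in for a
    learned gating network.
    """
    gates = softmax(np.array([m.mean() for m in modalities]))
    return sum(g * m for g, m in zip(gates, modalities))
```

In a trained model, the gates and attention projections would be learned parameters; the sketch only shows the data flow that distinguishes weighted fusion from equal-weight fusion.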