
CMD-HAR: Cross-Modal Disentanglement for Wearable Human Activity Recognition

2025-03-27

Hanyu Liu, SiYao Li, Ying Yu, Yixuan Jiang, Hang Xiao, Jingxi Long, Haotian Tang


Abstract

Human Activity Recognition (HAR) is a fundamental technology for numerous human-centered intelligent applications. Although deep learning methods have accelerated feature extraction, issues such as multimodal data mixing, activity heterogeneity, and complex model deployment remain largely unresolved. This paper addresses these issues in sensor-based human activity recognition. We propose a spatiotemporal attention modal decomposition alignment fusion strategy to tackle the mixed distribution of sensor data. Key discriminative features of activities are captured through cross-modal spatiotemporal disentangled representation, and gradient modulation is combined to alleviate data heterogeneity. In addition, a wearable deployment simulation system is constructed. Experiments on numerous public datasets demonstrate the effectiveness of the model.
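The abstract mentions gradient modulation to alleviate heterogeneity across modalities, but does not specify the scheme. As a minimal illustrative sketch (all names and the weighting rule are assumptions, not the paper's method), one common approach scales each modality's gradient so that a dominant modality, i.e. one with lower loss, is down-weighted and the lagging modality can catch up:

```python
def modulation_coeffs(losses, alpha=0.5):
    """Hypothetical per-modality gradient modulation.

    losses: dict mapping modality name -> current training loss.
    Returns a coefficient in (0, 1] per modality; modalities whose loss
    is below the mean (i.e. dominant ones) receive a coefficient < 1,
    while lagging modalities keep their full gradient (coefficient 1).
    """
    mean_loss = sum(losses.values()) / len(losses)
    coeffs = {}
    for name, loss in losses.items():
        ratio = loss / mean_loss  # < 1 means this modality is dominant
        if ratio < 1.0:
            coeffs[name] = 1.0 - alpha * (1.0 - ratio)  # damp its gradient
        else:
            coeffs[name] = 1.0
    return coeffs

# Example: accelerometer branch is training faster than the gyroscope branch,
# so its gradient is scaled down while the gyroscope's is left untouched.
coeffs = modulation_coeffs({"accel": 0.4, "gyro": 0.8})
```

During backpropagation, each modality branch's gradients would be multiplied by its coefficient before the optimizer step; the exact criterion used in the paper may differ.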
