
DeGCN: Deformable Graph Convolutional Networks for Skeleton-Based Action Recognition

2024-03-25 · IEEE Transactions on Image Processing, 2024 · Code Available

Woomin Myung, Nan Su, Jing-Hao Xue, Guijin Wang


Abstract

Graph convolutional networks (GCNs) have recently been studied to exploit the graph topology of the human body for skeleton-based action recognition. However, most of these methods aggregate messages via a fixed pattern across diverse action samples, lacking awareness of intra-class variation and ill-suited to skeleton sequences, which often contain redundant or even detrimental connections. In this paper, we propose a novel Deformable Graph Convolutional Network (DeGCN) to adaptively capture the most informative joints. The proposed DeGCN learns deformable sampling locations on both the spatial and temporal graphs, enabling the model to perceive discriminative receptive fields. Notably, since human action is inherently continuous, the corresponding temporal features are defined in a continuous latent space. Furthermore, we design a multi-branch framework that not only strikes a better trade-off between accuracy and model size, but also markedly strengthens the ensemble of the joint and bone modalities. Extensive experiments show that our proposed method achieves state-of-the-art performance on three widely used datasets: NTU RGB+D, NTU RGB+D 120, and NW-UCLA.
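The core idea of sampling skeleton features at learned, continuous locations can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, shapes, and the use of fixed (rather than predicted) offsets are illustrative assumptions; in DeGCN the offsets would be learned, and interpolation would run inside the network. The sketch shows only the deformable temporal sampling step, where linear interpolation between neighbouring frames realizes the continuous latent space mentioned in the abstract.

```python
import numpy as np

def deformable_temporal_sample(features, offsets):
    """Sample skeleton features at continuous (deformable) temporal positions.

    features: (T, V, C) array -- T frames, V joints, C channels.
    offsets:  (T, V) array of fractional temporal offsets per joint
              (fixed here for illustration; predicted by the network in practice).
    Returns a (T, V, C) array whose entries are gathered from the shifted,
    continuous temporal positions via linear interpolation.
    """
    T, V, C = features.shape
    base = np.arange(T)[:, None]             # (T, 1) integer frame indices
    pos = np.clip(base + offsets, 0, T - 1)  # continuous sampling positions
    lo = np.floor(pos).astype(int)           # lower neighbouring frame
    hi = np.minimum(lo + 1, T - 1)           # upper neighbouring frame
    w = (pos - lo)[..., None]                # interpolation weights, (T, V, 1)
    joint = np.arange(V)[None, :]            # joint index, broadcast over frames
    # Linear interpolation between the two neighbouring frames, per joint.
    return (1 - w) * features[lo, joint] + w * features[hi, joint]
```

With zero offsets the operation is the identity; with an offset of 0.5 each joint's feature becomes the average of two consecutive frames, which is how a continuous sampling position "between" frames is realized.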
