SOTAVerified

Skeleton Image Representation for 3D Action Recognition based on Tree Structure and Reference Joints

2019-09-11

Carlos Caetano, François Brémond, William Robson Schwartz

Code Available


Abstract

In recent years, the computer vision research community has studied how to model temporal dynamics in videos for 3D human action recognition. To that end, two main baseline approaches have been investigated: (i) Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM); and (ii) skeleton image representations used as input to a Convolutional Neural Network (CNN). Although RNN approaches achieve excellent results, such methods lack the ability to efficiently learn the spatial relations between the skeleton joints. On the other hand, the representations used to feed CNN approaches have the advantage of naturally learning structural information from 2D arrays (i.e., they learn spatial relations among the skeleton joints). To further improve such representations, we introduce the Tree Structure Reference Joints Image (TSRJI), a novel skeleton image representation to be used as input to CNNs. The proposed representation combines the use of reference joints and a tree-structured skeleton: while the former incorporates different spatial relationships between the joints, the latter preserves important spatial relations by traversing a skeleton tree with a depth-first order algorithm. Experimental results demonstrate the effectiveness of the proposed representation for 3D action recognition on two datasets, achieving state-of-the-art results on the recent NTU RGB+D 120 dataset.
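The depth-first traversal mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the skeleton tree below is a toy 7-joint hypothetical hierarchy (the actual NTU RGB+D 25-joint layout and the exact TSRJI construction are defined in the paper), and it assumes a common variant in which the traversal revisits a parent joint on back-tracking so that adjacent entries in the output always correspond to physically connected joints.

```python
# Sketch: order skeleton joints by a depth-first traversal of a skeleton
# tree, so that neighboring rows/columns in a skeleton image correspond
# to physically adjacent joints. Toy example only; not the paper's code.

def depth_first_order(tree, root):
    """Return joint indices in depth-first order, appending the parent
    again each time the traversal back-tracks to it (assumed variant)."""
    order = []

    def visit(joint):
        order.append(joint)
        for child in tree.get(joint, []):
            visit(child)
            order.append(joint)  # back-track through the parent

    visit(root)
    return order

# Hypothetical 7-joint skeleton: spine -> neck -> head, two shoulders
# off the neck, two hips off the spine.
skeleton = {
    0: [1, 5, 6],   # spine: neck, left hip, right hip
    1: [2, 3, 4],   # neck: head, left shoulder, right shoulder
}

print(depth_first_order(skeleton, 0))
# -> [0, 1, 2, 1, 3, 1, 4, 1, 0, 5, 0, 6, 0]
```

The resulting joint sequence defines the column order of the skeleton image, so the CNN's local receptive fields see spatially related joints next to each other; the reference-joint component would additionally express each joint's coordinates relative to a few anchor joints, which this sketch omits.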

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| NTU RGB+D 120 | TSRJI | Accuracy (Cross-Setup) | 67.9 | — | Unverified |

Reproductions