SOTAVerified

ASFormer: Transformer for Action Segmentation

2021-10-16 · Code Available

Fangqiu Yi, Hongyu Wen, Tingting Jiang


Abstract

Algorithms for the action segmentation task typically use temporal models to predict what action is occurring at each frame of a minutes-long daily activity. Recent studies have shown the potential of the Transformer in modeling the relations among elements in sequential data. However, there are several major concerns when directly applying the Transformer to the action segmentation task, such as the lack of inductive biases with small training sets, the deficit in processing long input sequences, and the limitation of the decoder architecture in utilizing temporal relations among multiple action segments to refine the initial predictions. To address these concerns, we design an efficient Transformer-based model for the action segmentation task, named ASFormer, with three distinctive characteristics: (i) We explicitly bring in local connectivity inductive priors because of the high locality of features. This constrains the hypothesis space within a reliable scope and helps the model learn a proper target function with small training sets. (ii) We apply a pre-defined hierarchical representation pattern that efficiently handles long input sequences. (iii) We carefully design the decoder to refine the initial predictions from the encoder. Extensive experiments on three public datasets demonstrate the effectiveness of our method. Code is available at https://github.com/ChinaYi/ASFormer.
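The local-connectivity prior described in (i) can be illustrated with a minimal sketch (assuming PyTorch; this is not the authors' implementation, and the layer sizes, window width, and dilation below are illustrative): a dilated temporal convolution followed by self-attention whose scores are masked to a local window around each frame.

```python
# Illustrative sketch of an ASFormer-style encoder block: dilated temporal
# convolution + self-attention restricted to a local window. Hypothetical
# names and sizes; not the paper's official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttnBlock(nn.Module):
    def __init__(self, dim, window=16, dilation=1):
        super().__init__()
        # Dilated conv over time; padding=dilation keeps the length fixed.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.window = window

    def forward(self, x):  # x: (batch, time, dim)
        t = x.size(1)
        h = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        # Local-connectivity prior: mask out frame pairs farther apart
        # than the attention window.
        idx = torch.arange(t)
        local = (idx[None, :] - idx[:, None]).abs() < self.window
        scores = scores.masked_fill(~local, float('-inf'))
        return x + self.out(torch.softmax(scores, dim=-1) @ v)

block = LocalAttnBlock(dim=64, window=16, dilation=2)
feats = torch.randn(1, 200, 64)  # 200 frames of 64-d features
print(block(feats).shape)        # torch.Size([1, 200, 64])
```

In the paper's hierarchical pattern, the window (or dilation) grows with depth so that later blocks see progressively longer temporal context at modest cost.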

Tasks

Benchmark Results

Dataset     | Model         | Metric     | Claimed | Verified | Status
50 Salads   | ASFormer+ASRF | F1@50%     | 79.3    |          | Unverified
50 Salads   | ASFormer      | F1@50%     | 76      |          | Unverified
Assembly101 | ASFormer      | F1@10%     | 33.4    |          | Unverified
Breakfast   | ASFormer      | Average F1 | 68      |          | Unverified
GTEA        | ASFormer      | F1@50%     | 79.2    |          | Unverified
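The F1@k metric in the table is the segmental F1 score at a temporal-IoU threshold (e.g. F1@50% uses threshold 0.5): a predicted segment counts as a true positive if it overlaps an unmatched ground-truth segment of the same label with IoU at or above the threshold. The sketch below is an illustrative implementation with greedy matching, not the official evaluation code.

```python
# Illustrative segmental F1@k, as used for action segmentation benchmarks.
# Greedy best-IoU matching; not the official evaluation script.

def segments(labels):
    """Collapse a per-frame label list into (label, start, end) runs."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs

def f1_at_k(pred, gt, threshold=0.5):
    p, g = segments(pred), segments(gt)
    used = [False] * len(g)  # each ground-truth segment matches at most once
    tp = 0
    for lbl, s, e in p:
        best, best_j = 0.0, -1
        for j, (gl, gs, ge) in enumerate(g):
            if gl != lbl or used[j]:
                continue
            inter = max(0, min(e, ge) - max(s, gs))
            union = max(e, ge) - min(s, gs)
            if inter / union > best:
                best, best_j = inter / union, j
        if best >= threshold:
            tp += 1
            used[best_j] = True
    fp, fn = len(p) - tp, len(g) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

pred = [0] * 10 + [1] * 5 + [2] * 10
gt   = [0] * 8  + [1] * 7 + [2] * 10
print(f1_at_k(pred, gt, 0.5))  # 1.0 (all three segments overlap with IoU >= 0.5)
```

Because the metric operates on segments rather than frames, it penalizes over-segmentation, which is exactly what ASFormer's decoder refinement targets.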

Reproductions