Timeception for Complex Action Recognition
Noureldien Hussein, Efstratios Gavves, Arnold W. M. Smeulders
Code
- github.com/noureldien/timeception (Official, in paper, PyTorch) ★ 0
- github.com/CMU-CREATE-Lab/deep-smoke-machine (PyTorch) ★ 126
- github.com/QUVA-Lab/timeception (PyTorch) ★ 0
Abstract
This paper focuses on the temporal aspect of recognizing human activities in videos, an important visual cue that has long been undervalued. We revisit the conventional definition of activity and restrict it to Complex Action: a set of one-actions with a weak temporal pattern that serves a specific purpose. Related works use spatiotemporal 3D convolutions with a fixed kernel size, too rigid to capture the varied temporal extents of complex actions and too short for long-range temporal modeling. In contrast, we use multi-scale temporal convolutions, and we reduce the complexity of 3D convolutions. The outcome is Timeception convolution layers, which reason about minute-long temporal patterns, a factor of 8 longer than the best related works. As a result, Timeception achieves impressive accuracy in recognizing the human activities of Charades, Breakfast Actions, and MultiTHUMOS. Further, we demonstrate that Timeception learns long-range temporal dependencies and tolerates variation in the temporal extents of complex actions.
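The abstract's key idea is to replace a fixed-kernel 3D convolution with cheap, multi-scale temporal convolutions over per-frame features. The sketch below is a hypothetical illustration of that idea (not the paper's implementation): it applies depthwise (per-channel) 1D convolutions with several kernel sizes along the time axis and concatenates the branches, so patterns of different temporal extents are covered.

```python
import numpy as np

def multiscale_temporal_conv(x, kernel_sizes=(3, 5, 7), seed=0):
    """Illustrative multi-scale temporal layer (not the official Timeception).

    x: array of shape (T, C), per-frame feature vectors.
    Each branch convolves every channel along time with its own kernel
    size ('same' padding); the branches are concatenated on the channel
    axis, giving shape (T, C * len(kernel_sizes)).
    """
    T, C = x.shape
    rng = np.random.default_rng(seed)  # random weights, for illustration only
    branches = []
    for k in kernel_sizes:
        # One depthwise kernel per channel: far cheaper than a full 3D
        # convolution, which couples space, time, and channels at once.
        kernels = rng.standard_normal((C, k))
        out = np.empty_like(x)
        for c in range(C):
            out[:, c] = np.convolve(x[:, c], kernels[c], mode="same")
        branches.append(out)
    return np.concatenate(branches, axis=1)

feats = np.ones((16, 4))           # 16 timesteps, 4 channels
y = multiscale_temporal_conv(feats)
print(y.shape)                     # (16, 12): 4 channels x 3 kernel sizes
```

Stacking such layers grows the temporal receptive field multiplicatively, which is how minute-long videos can be modeled without a single long kernel.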
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Breakfast | Timeception | Accuracy (%) | 71.3 | — | Unverified |