Long-term Temporal Convolutions for Action Recognition
Gül Varol, Ivan Laptev, Cordelia Schmid
- Code: github.com/gulvarol/ltctorch
Abstract
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames and fail to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw video pixel values and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition: UCF101 (92.7%) and HMDB51 (67.2%).
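The abstract describes space-time convolutional networks applied over long clips rather than a few frames. Below is a minimal PyTorch sketch of such an LTC-style network. The 60-frame 58×58 input size, the filter counts, and the fully connected widths are illustrative assumptions, not the authors' confirmed configuration; their released code (linked above) is written in Torch.

```python
import torch
import torch.nn as nn

class LTC(nn.Module):
    """Sketch of a long-term temporal convolution network.

    Assumptions (not confirmed by this page): a C3D-style stack of five
    3x3x3 convolutions, 60-frame clips at 58x58 resolution, and filter
    counts 64/128/256/256/256. All hyper-parameters are illustrative.
    """

    def __init__(self, num_classes=101, in_channels=3):
        super().__init__()
        channels = [in_channels, 64, 128, 256, 256, 256]
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2),  # halves time and space
            ]
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            # a 60x58x58 clip collapses to 1x1x1 after five 2x2x2 pools
            nn.Linear(256, 2048),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(2048, 2048),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(2048, num_classes),
        )

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        return self.classifier(self.features(x))

# Usage: an RGB clip of 60 frames; for an optical-flow input one would
# instead set in_channels=2 (horizontal and vertical flow components).
clip = torch.randn(2, 3, 60, 58, 58)
logits = LTC(num_classes=101)(clip)
print(logits.shape)  # torch.Size([2, 101])
```

The key design point the paper argues for is the temporal extent: with 60-frame inputs, the 3D convolutions and poolings aggregate evidence over the full duration of an action instead of a short snippet.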
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HMDB51 | LTC | Mean accuracy over 3 splits | 64.8 | — | Unverified |
| UCF101 | LTC | Mean accuracy over 3 splits | 91.7 | — | Unverified |