Event Neural Networks
Matthew Dutson, Yin Li, Mohit Gupta
Code
- github.com/WISION-Lab/event-nn-tf (official, TensorFlow)
- github.com/WISION-Lab/event-nn-pytorch (official, PyTorch)
Abstract
Video data is often repetitive; for example, the contents of adjacent frames are usually strongly correlated. Such redundancy occurs at multiple levels of complexity, from low-level pixel values to textures and high-level semantics. We propose Event Neural Networks (EvNets), which leverage this redundancy to achieve considerable computation savings during video inference. A defining characteristic of EvNets is that each neuron has state variables that provide it with long-term memory, which allows low-cost, high-accuracy inference even in the presence of significant camera motion. We show that it is possible to transform a wide range of neural networks into EvNets without re-training. We demonstrate our method on state-of-the-art architectures for both high- and low-level visual processing, including pose recognition, object detection, optical flow, and image enhancement. We observe roughly an order-of-magnitude reduction in computational costs compared to conventional networks, with minimal reductions in model accuracy.
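The abstract describes neurons that keep state variables as long-term memory so that computation is spent only where the input changes between frames. The sketch below illustrates that general idea with a delta-gated linear layer: it caches the last processed input and the accumulated output, and on each new frame performs an incremental update using only the input channels whose change exceeds a threshold. This is an assumption-laden illustration (the class name `EventLinear`, the per-channel threshold, and the incremental-update rule are all hypothetical), not the paper's actual EvNet transformation.

```python
import numpy as np

class EventLinear:
    """Hypothetical sketch of a delta-gated linear layer.

    State variables (long-term memory per the abstract's description):
      last_input  -- the input values most recently processed
      last_output -- the accumulated output

    On each call, only input channels whose change exceeds `threshold`
    trigger computation; the output is updated incrementally via the
    weight columns of those changed channels.
    """

    def __init__(self, weight, bias, threshold=1e-2):
        self.weight = weight        # shape (out_dim, in_dim)
        self.bias = bias            # shape (out_dim,)
        self.threshold = threshold
        self.last_input = None      # state: last processed input
        self.last_output = None     # state: accumulated output

    def __call__(self, x):
        if self.last_input is None:
            # First frame: do one dense pass to initialize the state.
            self.last_input = x.copy()
            self.last_output = self.weight @ x + self.bias
            return self.last_output.copy()
        delta = x - self.last_input
        active = np.abs(delta) > self.threshold  # changed channels only
        # Incremental update: y += W[:, active] @ delta[active].
        # Cost scales with the number of changed channels, not in_dim.
        self.last_output += self.weight[:, active] @ delta[active]
        self.last_input[active] = x[active]
        return self.last_output.copy()
```

With `threshold=0` the gated update is exact (it matches a dense matrix multiply on every frame); a positive threshold trades a small approximation error for skipped computation on near-static inputs, which is the redundancy the abstract targets.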