Treasure What You Have: Exploiting Similarity in Deep Neural Networks for Efficient Video Processing

2023-05-10

Hadjer Benmeziane, Halima Bouzidi, Hamza Ouarnoughi, Ozcan Ozturk, Smail Niar

Abstract

Deep learning has enabled various Internet of Things (IoT) applications. Still, designing models with high accuracy and computational efficiency remains a significant challenge, especially in real-time video processing applications. Such applications exhibit high inter- and intra-frame redundancy, leaving substantial room for efficiency gains. This paper proposes a similarity-aware training methodology that exploits data redundancy in video frames for efficient processing. Our approach introduces a per-layer regularization that enhances computation reuse by increasing the similarity of weights during training. We validate our methodology on two critical real-time applications, lane detection and scene parsing. We observe an average compression ratio of approximately 50% and a speedup of 1.5x across different models while maintaining the same accuracy.
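The abstract does not give the exact form of the per-layer regularizer. As a minimal illustrative sketch (not the authors' actual method), one way to encourage weight similarity is to add a penalty on differences between neighboring weight values in each layer, so that training pulls weights toward shared values and repeated computations can later be reused; the function name, `lam` coefficient, and the adjacent-difference formulation below are all assumptions.

```python
def similarity_penalty(layer_weights, lam=0.01):
    """Hypothetical per-layer similarity regularizer (illustrative only).

    Penalizes the absolute difference between adjacent weight values in a
    flattened layer, nudging weights toward repeated values so that
    identical multiplications can be cached and reused at inference time.
    """
    return lam * sum(abs(a - b) for a, b in zip(layer_weights, layer_weights[1:]))


def total_loss(task_loss, all_layer_weights, lam=0.01):
    """Adds the per-layer penalties to an ordinary task loss."""
    return task_loss + sum(similarity_penalty(w, lam) for w in all_layer_weights)
```

In this sketch, a layer whose weights are all identical incurs zero penalty, while widely varying weights are penalized in proportion to their spread, mirroring the stated goal of increasing weight similarity during training.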
