
On Efficient Neural Network Architectures for Image Compression

2024-06-14

Yichi Zhang, Zhihao Duan, Fengqing Zhu


Abstract

Recent advances in learning-based image compression typically come at the cost of high complexity, and designing computationally efficient architectures remains an open challenge. In this paper, we empirically investigate the impact of different network designs on rate-distortion performance and computational complexity. Our experiments cover various transforms, including convolutional neural networks and transformers, as well as various context models, including hierarchical, channel-wise, and space-channel context models. Based on the results, we present a series of efficient models, the final of which performs comparably to recent best-performing methods at significantly lower complexity. Extensive experiments provide insights into the design of architectures for learned image compression and potential directions for future research. The code is available at https://gitlab.com/viper-purdue/efficient-compression.
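The abstract compares architectures by rate-distortion performance. Learned image codecs of this kind are commonly trained with a Lagrangian rate-distortion objective, L = R + λ·D, where R is the estimated bitrate, D the distortion, and λ trades one against the other. The sketch below illustrates this standard objective only; the function name and all numeric values are placeholders, not taken from the paper.

```python
# Illustrative Lagrangian rate-distortion objective for learned image
# compression: L = R + lambda * D. All values are hypothetical examples.

def rd_loss(rate_bpp: float, mse: float, lmbda: float) -> float:
    """Combine estimated bitrate (bits per pixel) and distortion (MSE)
    into a single training loss using the trade-off weight lambda."""
    return rate_bpp + lmbda * mse

# Example: a model spending 0.5 bpp with MSE 30 at lambda = 0.01
loss = rd_loss(rate_bpp=0.5, mse=30.0, lmbda=0.01)
print(loss)  # 0.8
```

Sweeping λ over a range of values traces out a model's rate-distortion curve, which is how competing architectures are typically compared.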
