
Learned transform compression with optimized entropy encoding

2021-04-07 · ICLR Workshop on Neural Compression 2021 · Code Available

Magda Gregorová, Marc Desaules, Alexandros Kalousis


Abstract

We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes. We use a soft relaxation of the quantization operation to allow back-propagation of gradients, and we employ vector (rather than scalar) quantization of the latent codes. Furthermore, we apply a similar relaxation to the code probability assignments, enabling direct optimization of the code entropy. To the best of our knowledge, this approach is completely novel. We conduct a set of proof-of-concept experiments confirming the potency of our approaches.
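The core idea of the abstract — replacing hard nearest-neighbour vector quantization with a soft, differentiable assignment, which also yields a differentiable estimate of the code entropy — can be sketched as follows. This is a minimal numpy illustration under assumed conventions (a softmax over negative squared distances with temperature `sigma`); the function name and parameters are illustrative, not the paper's exact formulation.

```python
import numpy as np

def soft_vector_quantize(z, codebook, sigma=1.0):
    """Softly assign latent vectors z (n, d) to codebook entries (k, d).

    Illustrative sketch: assignment weights are a softmax over negative
    squared distances, so the "quantized" output is a convex combination
    of codebook vectors and gradients can flow through it, unlike a hard
    nearest-neighbour lookup.
    """
    # Squared Euclidean distances between each latent and each code, shape (n, k).
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    logits = -d2 / sigma
    logits -= logits.max(axis=1, keepdims=True)   # numerical stabilisation
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # soft assignment probabilities
    z_soft = w @ codebook                         # differentiable soft codes

    # Average soft assignment gives a differentiable code-usage distribution,
    # from which a (soft) entropy in bits can be optimized directly.
    p = w.mean(axis=0)
    entropy = -(p * np.log2(p + 1e-12)).sum()
    return z_soft, w, entropy
```

As `sigma` shrinks toward zero, the soft assignments approach one-hot vectors and the scheme recovers ordinary hard vector quantization; in training one would typically anneal this temperature.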
