
On reducing the correlation of bottleneck representations in Autoencoders

2021-03-04

Anonymous


Abstract

Image compression is an important image processing task, and there has recently been growing interest in using autoencoders (AEs) to solve it. An AE has two goals: (i) compress the original input to a low-dimensional space, at the bottleneck of the network topology, using the encoder, and (ii) reconstruct the input from the bottleneck representation using the decoder. Both parts are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only the variations in the input data required to reconstruct the input, without preserving the redundancies. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we propose an additional loss term, based on the pairwise correlation of the neurons, which complements the standard reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. The proposed approach is tested on the MNIST dataset and leads to superior experimental results.
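The abstract does not give the exact form of the correlation loss, but one plausible reading is: compute the pairwise correlation matrix of the bottleneck neurons over a mini-batch and penalize its off-diagonal entries, adding this term (weighted by some coefficient λ, an assumption here) to the reconstruction loss. A minimal NumPy sketch, with the specific normalization chosen for illustration:

```python
import numpy as np

def correlation_penalty(z):
    """Pairwise-correlation penalty on bottleneck activations.

    z: (batch, d) array of bottleneck representations for one mini-batch.
    Returns the mean squared off-diagonal entry of the d x d feature
    correlation matrix -- 0 when the features are fully decorrelated,
    1 when every pair of features is perfectly correlated.
    """
    n, d = z.shape
    zc = z - z.mean(axis=0, keepdims=True)        # center each feature
    std = zc.std(axis=0, keepdims=True) + 1e-8    # avoid divide-by-zero
    zn = zc / std                                 # zero-mean, unit-std columns
    corr = zn.T @ zn / n                          # d x d correlation matrix
    off_diag = corr - np.diag(np.diag(corr))      # zero out the diagonal
    return float((off_diag ** 2).sum() / (d * (d - 1)))

# In training, this would complement the reconstruction loss, e.g.:
#   loss = reconstruction_loss(x, x_hat) + lam * correlation_penalty(z)
# where lam is a hypothetical trade-off weight not specified in the abstract.
```

The penalty is differentiable, so the same expression written in an autodiff framework can be backpropagated through the encoder alongside the distortion term.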
