
Hybrid Mutual Information Lower-bound Estimators for Representation Learning

2021-03-04 · ICLR Workshop on Neural Compression 2021

Abhishek Sinha, Jiaming Song, Stefano Ermon


Abstract

Self-supervised representation learning methods based on the principle of maximizing mutual information have been successful in unsupervised learning of visual representations. These approaches are low-variance mutual information lower-bound estimators, yet their lack of distributional assumptions prevents them from learning certain important information, such as texture. Estimators based on distributional assumptions bypass this issue via autoencoders, but they tend to perform worse on downstream classification. To this end, we consider a hybrid approach that incorporates both the distribution-free contrastive lower bound and the distribution-based autoencoder lower bound. We illustrate that, with one set of representations, the hybrid approach is able to achieve good performance on multiple downstream tasks such as classification, reconstruction, and generation.
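The hybrid objective described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: it assumes an InfoNCE-style contrastive lower bound (scores for positive pairs on the diagonal of a critic matrix) and a Gaussian-decoder reconstruction bound for the autoencoder term; the function names and the weighting `lam` are illustrative choices, not taken from the paper.

```python
import numpy as np

def info_nce_lower_bound(scores):
    """InfoNCE (contrastive) lower bound on mutual information.

    scores[i, j] is a critic score between x_i and y_j; positive pairs
    sit on the diagonal. The estimate is capped at log(batch_size).
    """
    n = scores.shape[0]
    # Row-wise log-softmax; the diagonal entry is the positive pair.
    log_probs = scores - np.log(np.sum(np.exp(scores), axis=1, keepdims=True))
    return np.mean(np.diag(log_probs)) + np.log(n)

def reconstruction_lower_bound(x, x_hat, sigma=1.0):
    """Autoencoder-style bound: mean Gaussian log-likelihood of the
    reconstructions (the distribution-based term, up to an entropy constant)."""
    d = x.shape[1]
    sq_err = np.sum((x - x_hat) ** 2, axis=1)
    return np.mean(-0.5 * sq_err / sigma**2
                   - 0.5 * d * np.log(2 * np.pi * sigma**2))

def hybrid_objective(scores, x, x_hat, lam=0.5):
    """Weighted combination of the two lower bounds (hypothetical weighting)."""
    return (lam * info_nce_lower_bound(scores)
            + (1.0 - lam) * reconstruction_lower_bound(x, x_hat))
```

In training, both terms would be maximized jointly over one shared encoder, so a single representation supports contrastive discrimination and reconstruction at once.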
