
How good are variational autoencoders at transfer learning?

2023-04-21 · Code Available

Lisa Bonheme, Marek Grzes


Abstract

Variational autoencoders (VAEs) are used for transfer learning across various research domains, such as music generation or medical image analysis. However, there is no principled way to assess, before transfer, which components to retrain or whether transfer learning is likely to help on a target task. We propose to explore this question through the lens of representational similarity. Specifically, using Centred Kernel Alignment (CKA) to evaluate the similarity of VAEs trained on different datasets, we show that encoders' representations are generic while decoders' are specific. Based on these insights, we discuss the implications for selecting which components of a VAE to retrain and propose a method to visually assess whether transfer learning is likely to help on classification tasks.
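The abstract's core tool, Centred Kernel Alignment, has a simple closed form in its linear variant: representations are compared via centred Gram matrices, giving a similarity score between 0 and 1 that is invariant to orthogonal transformations and isotropic scaling. A minimal sketch of linear CKA (the function name and API here are illustrative, not taken from the paper's code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n, d1) array, Y: (n, d2) array; rows correspond to the
    same n examples passed through two different networks/layers.
    Returns a scalar in [0, 1]; 1 means identical representations
    up to orthogonal transformation and scaling.
    """
    # Centre each feature dimension over the examples.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based formula: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator
```

In the paper's setting, X and Y would be, e.g., the encoder activations of two VAEs trained on different datasets, evaluated on a shared probe set; high CKA across datasets is what supports the claim that encoder representations are generic.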
