
One Objective for All Models --- Self-supervised Learning for Topic Models

2021-09-29

Zeping Luo, Cindy Weng, Shiyou Wu, Mo Zhou, Rong Ge

Abstract

Self-supervised learning has significantly improved the performance of many NLP tasks. In this paper, we highlight a key advantage of self-supervised learning: when applied to data generated by topic models, self-supervised learning can be oblivious to the specific model, and hence is less susceptible to model mis-specification. In particular, we prove that commonly used self-supervised objectives based on reconstruction or contrastive samples can both recover useful posterior information for general topic models. Empirically, we show that the same objectives can perform competitively against posterior inference using the correct model, while outperforming posterior inference using a mis-specified model.
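To make the setting concrete, here is a minimal, hypothetical sketch (not code from the paper) of the kind of data the abstract describes: documents drawn from an LDA-style topic model, with contrastive pairs built by splitting a document in half. Because both halves share the same latent topic mixture, they form a natural positive pair for a contrastive objective, while halves of different documents serve as negatives. All variable names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy topic model (illustrative sizes, not from the paper):
# K topics over a V-word vocabulary, documents of fixed length.
K, V, doc_len = 3, 20, 30
topics = rng.dirichlet(np.ones(V) * 0.1, size=K)  # topic-word distributions

def sample_doc(alpha=0.5):
    """Generate one bag-of-words document from an LDA-style model."""
    theta = rng.dirichlet(np.ones(K) * alpha)  # document-topic mixture
    z = rng.choice(K, size=doc_len, p=theta)   # latent topic per word
    return np.array([rng.choice(V, p=topics[k]) for k in z])

def contrastive_pair(words):
    """Randomly split a document's words into two halves.

    Both halves are generated from the same latent topic mixture theta,
    so they make a positive pair; a half from a different document is a
    negative sample."""
    perm = rng.permutation(len(words))
    half = len(words) // 2
    return words[perm[:half]], words[perm[half:]]

doc_a, doc_b = sample_doc(), sample_doc()
pos_x, pos_y = contrastive_pair(doc_a)  # positive pair (same document)
neg_y, _ = contrastive_pair(doc_b)      # negative (different document)
```

A contrastive objective would then train an encoder so that embeddings of `pos_x` and `pos_y` are close while embeddings of `pos_x` and `neg_y` are far apart; the paper's claim is that such objectives recover useful posterior information without knowing the generating topic model.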
