Conditional Word Embedding and Hypothesis Testing via Bayes-by-Backprop
EMNLP 2018 · 2018-10-01
Rujun Han, Michael Gill, Arthur Spirling, Kyunghyun Cho
Abstract
Conventional word embedding models do not leverage information from document meta-data, and they do not model uncertainty. We address these concerns with a model that incorporates document covariates to estimate conditional word embedding distributions. Our model allows for (a) hypothesis tests about the meanings of terms, (b) assessments as to whether a word is near or far from another conditioned on different covariate values, and (c) assessments as to whether estimated differences are statistically significant.
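The core idea — per-covariate embedding distributions learned with Bayes-by-Backprop, compared via Monte Carlo sampling — can be illustrated with a minimal sketch. This is not the paper's implementation: the covariate names, the two-table parameterization, and the interval-based significance check are illustrative assumptions; a real model would learn the variational parameters from data rather than initialize them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 5, 8

# Hypothetical variational parameters: one (mu, rho) table per covariate
# value. Bayes-by-Backprop parameterizes the std-dev as softplus(rho) so
# it stays positive during gradient training.
params = {
    cov: {"mu": rng.normal(0, 0.1, (vocab, dim)),
          "rho": rng.normal(-3, 0.1, (vocab, dim))}
    for cov in ("covariate_a", "covariate_b")
}

def sample_embedding(cov, word_id):
    """Draw one embedding via the reparameterization trick:
    w = mu + softplus(rho) * eps, eps ~ N(0, I)."""
    p = params[cov]
    sigma = np.log1p(np.exp(p["rho"][word_id]))  # softplus
    eps = rng.standard_normal(dim)
    return p["mu"][word_id] + sigma * eps

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def similarity_interval(cov, n=2000):
    """Monte Carlo 95% interval for the cosine similarity of words 0 and 1
    under a given covariate value, using samples from the embedding
    distributions."""
    sims = np.array([cosine(sample_embedding(cov, 0),
                            sample_embedding(cov, 1))
                     for _ in range(n)])
    return np.percentile(sims, [2.5, 97.5])

lo_a, hi_a = similarity_interval("covariate_a")
lo_b, hi_b = similarity_interval("covariate_b")
# Non-overlapping intervals across covariate values would indicate a
# statistically meaningful shift in the relation between the two words.
print(lo_a, hi_a, lo_b, hi_b)
```

Because each embedding is a distribution rather than a point, any quantity derived from it (a distance, a neighbor rank) inherits an uncertainty estimate, which is what makes hypothesis tests of the kind described in the abstract possible.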