SOTAVerified

Probabilistic Embeddings with Laplacian Graph Priors

2022-03-25

Väinö Yrjänäinen, Måns Magnusson


Abstract

We introduce probabilistic embeddings using Laplacian priors (PELP). The proposed model enables the incorporation of graph side information into static word embeddings. We show theoretically that the model unifies several previously proposed embedding methods under one umbrella: PELP generalises graph-enhanced, group, dynamic, and cross-lingual static word embeddings, and enables any combination of these previous models in a straightforward fashion. Furthermore, we show empirically that our model matches the performance of the previous models it subsumes as special cases. In addition, we demonstrate its flexibility by applying it to the comparison of political sociolects over time. Finally, we provide code as a TensorFlow implementation, enabling flexible estimation in different settings.
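To illustrate the core idea behind a Laplacian graph prior, the sketch below shows (in NumPy, not the paper's actual TensorFlow code) how a Gaussian prior with precision proportional to the graph Laplacian L = D - A penalises embeddings of adjacent nodes for being far apart. All function names and the toy graph are hypothetical; this is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np

# A Laplacian prior on an embedding matrix W (one row per word) takes
#   -log p(W) ∝ (lam / 2) * tr(W^T L W)
#             = (lam / 2) * sum over edges (i,j) of ||w_i - w_j||^2,
# so embeddings of words linked in the side-information graph are pulled together.

def laplacian(A):
    """Unnormalised graph Laplacian L = D - A of adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def laplacian_prior_penalty(W, A, lam=1.0):
    """Negative log-prior (up to an additive constant) under the Laplacian prior."""
    L = laplacian(A)
    return 0.5 * lam * np.trace(W.T @ L @ W)

# Toy example: 3 words; words 0 and 1 are linked in the side-information graph.
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
W = np.array([[1., 0.],    # embedding of word 0
              [1., 0.],    # word 1: identical to its neighbour -> zero penalty
              [5., 5.]])   # word 2: unconnected, so never penalised
print(laplacian_prior_penalty(W, A))  # -> 0.0
```

In a full model this penalty would be added to the embedding likelihood (e.g. a skip-gram objective) and minimised jointly; the choice of graph then encodes which words, groups, time slices, or languages should share similar representations.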
