Non-Linear Relational Information Probing in Word Embeddings

2021-11-16 · ACL ARR November 2021

Anonymous

Abstract

Pre-trained word embeddings such as SkipGram and GloVe are known to encode a myriad of useful information about words. In this work, we use multilayer perceptrons (MLPs) to probe the relational information contained in these embeddings. Previous studies that applied linear models to the analogy and relation induction tasks found that SkipGram generally outperforms GloVe, suggesting that SkipGram embeddings contain more relational information than GloVe embeddings. However, with a non-linear probe such as an MLP, our results instead suggest that GloVe embeddings contain more relational information than SkipGram embeddings, but that a substantial portion of it is stored in a non-linear form, which previous linear models therefore failed to reveal. Interpreting our relation probes with post-hoc analysis provides an explanation for this difference.
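To illustrate the kind of probe the abstract describes, here is a minimal, hypothetical sketch (not the authors' code): a one-hidden-layer MLP is trained to classify which relation holds between a word pair, given the concatenation of the two words' embeddings. Random vectors with relation-specific offsets stand in for real SkipGram/GloVe embeddings, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs, n_rel = 20, 600, 3

# Synthetic stand-ins for word embeddings: each tail word's vector is the
# head's vector plus a relation-specific offset and some noise.
offsets = rng.normal(size=(n_rel, dim))
y = rng.integers(0, n_rel, size=n_pairs)
head = rng.normal(size=(n_pairs, dim))
tail = head + offsets[y] + 0.1 * rng.normal(size=(n_pairs, dim))
X = np.concatenate([head, tail], axis=1)  # probe input: concatenated pair
Y = np.eye(n_rel)[y]                      # one-hot relation labels

# One-hidden-layer MLP probe trained with plain gradient descent.
h, lr = 32, 0.5
W1 = 0.1 * rng.normal(size=(X.shape[1], h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, n_rel));      b2 = np.zeros(n_rel)

for _ in range(500):
    Z = np.maximum(X @ W1 + b1, 0)             # ReLU hidden layer
    logits = Z @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)               # softmax probabilities
    G = (P - Y) / n_pairs                      # dL/dlogits, cross-entropy loss
    gW2, gb2 = Z.T @ G, G.sum(0)
    GZ = (G @ W2.T) * (Z > 0)                  # backprop through ReLU
    gW1, gb1 = X.T @ GZ, GZ.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

acc = (P.argmax(1) == y).mean()
print(f"MLP probe training accuracy: {acc:.2f}")
```

A linear probe would be the same setup with the hidden layer removed; comparing the two accuracies on real embeddings is what distinguishes linearly accessible relational information from information stored non-linearly.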
