
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines

2023-06-14 · Code Available

Narutatsu Ri, Fei-Tzin Lee, Nakul Verma

Abstract

While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the mechanism by which they give rise to such geometric structure has remained obscure. We find that an elementary contrastive-style method applied to distributional information performs competitively with popular word embedding models on analogy-recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and we establish a precise relationship between a corpus's co-occurrence statistics and the geometric structure of the resulting word embeddings.
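The abstract's core claim — that a contrastive loss over co-occurrence (distributional) information suffices to learn embeddings — can be illustrated with a minimal negative-sampling sketch: observed co-occurring word pairs are pushed together, randomly sampled pairs are pushed apart. This is a generic skip-gram-with-negative-sampling-style objective, not the paper's exact method; the toy corpus, window size, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus; the paper trains on real corpora, so this data and these
# hyperparameters are assumptions for illustration only.
corpus = "king queen man woman king man queen woman king queen".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

dim, lr, n_neg, epochs = 8, 0.1, 2, 200
W = rng.normal(scale=0.1, size=(len(vocab), dim))  # target-word embeddings
C = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for t in range(len(corpus) - 1):
        w, c = idx[corpus[t]], idx[corpus[t + 1]]  # adjacent co-occurrence
        # Positive pair: raise the score of an observed co-occurrence.
        g_pos = sigmoid(W[w] @ C[c]) - 1.0
        dW, dC = g_pos * C[c], g_pos * W[w]
        W[w] -= lr * dW
        C[c] -= lr * dC
        # Negative pairs: lower the score of randomly sampled contexts.
        for n in rng.integers(0, len(vocab), size=n_neg):
            g_neg = sigmoid(W[w] @ C[n])
            dW, dC = g_neg * C[n], g_neg * W[w]
            W[w] -= lr * dW
            C[n] -= lr * dC
```

In this formulation the loss depends on the data only through which pairs co-occur and how often, which is the sense in which the resulting geometry is tied to the corpus's co-occurrence statistics.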
