
On the Correlation of Word Embedding Evaluation Metrics

2020-05-01 · LREC 2020

François Torregrossa, Vincent Claveau, Nihel Kooli, Guillaume Gravier, Robin Allesiardo


Abstract

Word embeddings are involved in a wide range of natural language processing tasks. These geometrical representations are easy for automatic systems to manipulate, so they quickly spread to all areas of language processing. While they outperform their predecessors, it is still not straightforward why and how they do so. In this article, we investigate a variety of evaluation metrics on various datasets in order to discover how they correlate with each other. These correlations lead to 1) a fast way to select the best word embedding among many candidates, 2) a new criterion that may improve the current state of static Euclidean word embeddings, and 3) a way to build a set of complementary datasets, i.e. one in which each dataset quantifies a different aspect of word embeddings.
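The core idea of correlating evaluation metrics can be sketched as follows. This is a minimal toy illustration, not the paper's method or data: it assumes Spearman rank correlation between datasets, computed over how each dataset ranks a set of embeddings. All embedding names, dataset names, and scores below are invented.

```python
# Toy sketch: correlate evaluation datasets by how they rank embeddings.
# All names and numbers are invented for illustration.
DATASETS = ["wordsim", "analogy", "ner_f1"]
SCORES = {
    "emb_a": [0.61, 0.45, 0.78],
    "emb_b": [0.55, 0.50, 0.74],
    "emb_c": [0.70, 0.41, 0.80],
}

def ranks(values):
    # Rank positions (0 = lowest); assumes no ties for simplicity.
    order = sorted(range(len(values)), key=values.__getitem__)
    out = [0] * len(values)
    for r, i in enumerate(order):
        out[i] = r
    return out

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks (no ties).
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def column(j):
    # Scores of all embeddings on dataset j.
    return [SCORES[e][j] for e in SCORES]

# Pairwise correlations between datasets: strongly correlated datasets
# are redundant, while weakly correlated ones probe complementary
# aspects of the embeddings (point 3 of the abstract).
correlations = {
    (DATASETS[i], DATASETS[j]): spearman(column(i), column(j))
    for i in range(len(DATASETS))
    for j in range(i + 1, len(DATASETS))
}
```

In this toy setting, a dataset pair with correlation near 1 ranks the embeddings almost identically, so evaluating on one of the pair suffices, which is the intuition behind using correlations to pick a fast subset of evaluations.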
