
An Artificial Language Evaluation of Distributional Semantic Models

2017-08-01 · CoNLL 2017

Fatemeh Torabi Asr, Michael Jones


Abstract

Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from count-based models. This paper is an attempt to reveal the underlying contribution of additional training data and post-processing steps to each type of model in word similarity and relatedness inference tasks. We do so by designing an artificial language framework, training a predictive and a count-based model on data sampled from its grammar, and evaluating the resulting word vectors in paradigmatic and syntagmatic tasks defined with respect to the grammar.
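The evaluation idea in the abstract can be illustrated with a minimal sketch (not the authors' code): sample sentences from a hypothetical toy grammar with fixed word classes, build count-based (PPMI-weighted co-occurrence) word vectors, and check the paradigmatic prediction that words from the same class end up closer than words from different classes. The grammar, word lists, and class names below are illustrative assumptions, not the paper's actual artificial language.

```python
import math
import random
from collections import Counter, defaultdict
from itertools import combinations

random.seed(0)

# Hypothetical toy grammar: every sentence is SUBJ VERB OBJ, and each
# slot stands in for one paradigmatic word class of the grammar.
GRAMMAR = {
    "SUBJ": ["alice", "bob", "carol"],
    "VERB": ["sees", "greets", "helps"],
    "OBJ": ["dave", "erin", "frank"],
}

def sample_sentence():
    return [random.choice(GRAMMAR[slot]) for slot in ("SUBJ", "VERB", "OBJ")]

corpus = [sample_sentence() for _ in range(5000)]

# Count-based model: symmetric co-occurrence counts within a sentence.
cooc = defaultdict(Counter)
for sent in corpus:
    for w, c in combinations(sent, 2):
        cooc[w][c] += 1
        cooc[c][w] += 1

total = sum(sum(c.values()) for c in cooc.values())
word_count = {w: sum(c.values()) for w, c in cooc.items()}
vocab = sorted(word_count)

def ppmi_vector(w):
    # Positive pointwise mutual information over the whole vocabulary.
    vec = []
    for c in vocab:
        if cooc[w][c] == 0:
            vec.append(0.0)
            continue
        pmi = math.log(cooc[w][c] * total / (word_count[w] * word_count[c]))
        vec.append(max(pmi, 0.0))
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = {w: ppmi_vector(w) for w in vocab}

# Paradigmatic test: two SUBJ words never co-occur but share contexts,
# so they should be more similar than a SUBJ/VERB pair.
same = cosine(vecs["alice"], vecs["bob"])
cross = cosine(vecs["alice"], vecs["sees"])
print(same > cross)
```

A predictive model (e.g. a skip-gram network trained on the same sampled corpus) would be scored on the identical same-class vs. cross-class comparison, which is what lets the two model families be contrasted under controlled data.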
