
Quantifying Context Overlap for Training Word Embeddings

2018-10-01 · EMNLP 2018

Yimeng Zhuang, Jinghui Xie, Yinhe Zheng, Xuan Zhu


Abstract

Most models for learning word embeddings are trained on the context information of words, more precisely on first-order co-occurrence relations. In this paper, a metric is designed to estimate second-order co-occurrence relations based on context overlap. The estimated values are then used as augmented data to enhance the learning of word embeddings through joint training with existing neural word embedding models. Experimental results show that the enhanced approach yields better word vectors on word similarity tasks and several downstream NLP tasks.
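The abstract does not specify the exact metric, but the idea of scoring second-order co-occurrence via context overlap can be sketched as follows. This is an illustrative assumption, not the paper's method: here the overlap between two words is the weighted Jaccard similarity of their context count distributions, where a high score means the words appear in similar contexts even if they never co-occur directly.

```python
from collections import Counter

def context_counts(corpus, window=2):
    """Collect context-word counts within a symmetric window for each word
    (first-order co-occurrence statistics)."""
    contexts = {}
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            ctx = contexts.setdefault(w, Counter())
            for j in range(lo, hi):
                if j != i:
                    ctx[sent[j]] += 1
    return contexts

def context_overlap(contexts, w1, w2):
    """Weighted Jaccard overlap of two words' context distributions:
    an illustrative proxy for second-order co-occurrence."""
    c1 = contexts.get(w1, Counter())
    c2 = contexts.get(w2, Counter())
    if not c1 or not c2:
        return 0.0
    inter = sum(min(c1[t], c2[t]) for t in c1.keys() & c2.keys())
    union = sum((c1 | c2).values())  # Counter union takes the max count per key
    return inter / union

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]
ctx = context_counts(corpus)
# "cat" and "dog" never co-occur, yet share identical contexts here,
# so their overlap score is maximal.
print(context_overlap(ctx, "cat", "dog"))  # → 1.0
```

Scores like these could then serve as extra training targets alongside the observed first-order co-occurrences, which is the joint-training role the abstract describes for the augmented data.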
