
Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment

2019-07-01 · ACL 2019 · Code Available

Goran Glavaš, Ivan Vulić


Abstract

Lexical entailment (LE; also known as the hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet, unlike existing retrofitting models, it captures a general specialization function, allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over the state of the art in cross-lingual LE detection.
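The abstract's key idea, a specialization function that can be applied to the whole vector space rather than retrofitting only the constrained words, can be illustrated with a minimal sketch. The mapping below is a single affine transform with random placeholder weights (GLEN learns its function from lexical constraints; the actual architecture, objective, and the asymmetric scoring used here are hypothetical stand-ins), and the toy vocabulary and vectors are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributional space: 5 words with 10-dim vectors (hypothetical data).
vocab = ["animal", "dog", "poodle", "car", "vehicle"]
X = rng.normal(size=(5, 10))

# A specialization function f, decoupled from the embedding model: it can be
# applied to ANY vector, including words unseen in the training constraints.
# In GLEN this function is learned; the weights here are random placeholders.
W = rng.normal(size=(10, 10)) * 0.1
b = np.zeros(10)

def specialize(v):
    """Map distributional vectors into an LE-specialized space."""
    return np.tanh(v @ W + b)

# Apply the function to the entire space at once -- the point of a
# *generalized* model, as opposed to retrofitting individual vectors.
X_spec = specialize(X)

# An asymmetric LE score (illustrative): cosine distance plus a vector-norm
# difference, so that score(hyponym, hypernym) != score(hypernym, hyponym).
def le_score(u, v):
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return (1.0 - cos) + (np.linalg.norm(u) - np.linalg.norm(v))

s_dog_animal = le_score(X_spec[1], X_spec[0])  # dog -> animal
s_animal_dog = le_score(X_spec[0], X_spec[1])  # animal -> dog
```

Because the norm-difference term flips sign when the arguments are swapped, the score is asymmetric, which is the property an is-a detector needs; any function applied uniformly to all vectors generalizes to words outside the constraint set by construction.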
