SOTAVerified

Syntagmatic Word Embeddings for Unsupervised Learning of Selectional Preferences

2021-08-01 · ACL (RepL4NLP) 2021 · Code Available

Renjith P. Ravindran, Akshay Badola, Kavi Narayana Murthy


Abstract

Selectional Preference (SP) captures the tendency of a word to semantically select other words to be in a direct syntactic relation with it, and thus informs us about which syntactic word configurations are meaningful. SP is therefore a valuable resource for Natural Language Processing (NLP) systems and for semanticists. Learning SP has generally been treated as a supervised task, because it requires a parsed corpus as a source of syntactically related word pairs. In this paper we show that simple distributional analysis can learn a good amount of SP without the need for an annotated corpus. We extend the general word embedding technique with directional word context windows, giving word representations that better capture syntagmatic relations. We test on the SP-10K dataset and demonstrate that syntagmatic embeddings outperform paradigmatic embeddings. We also evaluate a supervised version of these embeddings and show that unsupervised syntagmatic embeddings can be as good as supervised embeddings. We also make the source code of our implementation available.
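The core idea of a directional context window, as described in the abstract, can be sketched in a few lines: instead of pooling all neighbors of a word into one bag, left-context and right-context co-occurrences are counted separately, so the representation is sensitive to word order and thus to syntagmatic relations. The function below is a minimal illustration of that idea, not the authors' exact implementation; the function name, `window` parameter, and `("L", …)`/`("R", …)` key convention are assumptions for this sketch.

```python
from collections import defaultdict

def directional_cooccurrence(sentences, window=2):
    """Count left- and right-context co-occurrences separately.

    Separating directions is one simple way to bias representations
    toward syntagmatic (word-order-sensitive) relations.  This is an
    illustrative sketch, not the paper's exact method.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            # right context: words that follow w, keyed with "R"
            for c in tokens[i + 1 : i + 1 + window]:
                counts[w][("R", c)] += 1
            # left context: words that precede w, keyed with "L"
            for c in tokens[max(0, i - window) : i]:
                counts[w][("L", c)] += 1
    return counts

counts = directional_cooccurrence([["the", "dog", "barks"]], window=1)
# "dog" sees "the" in its left context and "barks" in its right context
```

In an undirected window, "the dog" and "dog the" would produce identical counts; keeping the direction distinguishes them, which is what makes the resulting vectors syntagmatic rather than purely paradigmatic.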
