
Multi-View Learning of Word Embeddings via CCA

2011-12-01 · NeurIPS 2011

Paramveer Dhillon, Dean P. Foster, Lyle H. Ungar


Abstract

Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations, which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL), which uses a fast spectral method to estimate low-dimensional, context-specific word representations from unlabeled data. These representations can then be used as features with any supervised learner. LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.
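The core spectral tool behind methods like LR-MVL is Canonical Correlation Analysis (CCA) between two "views" of a word (e.g. its left and right contexts). The sketch below is not the authors' LR-MVL algorithm, just a minimal illustration of CCA via an SVD of the whitened cross-covariance, on synthetic paired views with a shared low-dimensional latent structure; all names, dimensions, and the regularization constant are assumptions for the example.

```python
# Hedged sketch of CCA between two views, assuming synthetic data;
# this is NOT the paper's LR-MVL implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k = 500, 20, 20, 5  # samples, view dims, embedding rank

# Synthetic paired views (stand-ins for left- and right-context counts),
# driven by a shared k-dimensional latent variable plus noise.
Z = rng.normal(size=(n, k))
X = Z @ rng.normal(size=(k, d1)) + 0.1 * rng.normal(size=(n, d1))
Y = Z @ rng.normal(size=(k, d2)) + 0.1 * rng.normal(size=(n, d2))

def cca(X, Y, k, reg=1e-6):
    """Top-k CCA directions via SVD of the whitened cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):
        # Symmetric inverse square root for whitening.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    # Columns of A and B map each view into the shared low-dim space.
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]

A, B, corrs = cca(X, Y, k)
proj = X @ A  # low-dimensional representation of view one
print(corrs)  # canonical correlations, largest first
```

Because the whitening makes the problem an ordinary SVD, the solution is a global optimum of the CCA objective, which is the kind of guarantee the abstract refers to (as opposed to the local optima of iteratively trained neural embeddings).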
