
An Unsupervised Neural Attention Model for Aspect Extraction

2017-07-01 · ACL 2017

Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier

Code Available

Abstract

Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.
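The core idea in the abstract — using an attention mechanism over word embeddings to de-emphasize irrelevant words when forming a sentence representation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings and the transformation matrix `M` are random placeholders standing in for learned parameters, and the scoring function (a bilinear score against the sentence's average embedding, followed by a softmax) is one plausible form of such an attention mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word embeddings for a 5-word sentence (embedding dim 4).
# In a trained model these would be neural word embeddings in which
# co-occurring words lie close together; here they are random.
E = rng.normal(size=(5, 4))

# Global context for the sentence: the average of its word embeddings.
y = E.mean(axis=0)

# M would be a learned transformation; a random matrix stands in for it.
M = rng.normal(size=(4, 4))

# Bilinear relevance score of each word against the sentence context,
# then a softmax to obtain attention weights in (0, 1) summing to 1.
d = E @ M @ y
a = np.exp(d - d.max())
a /= a.sum()

# Attention-weighted sentence embedding: words the attention judges
# irrelevant receive low weight and contribute little.
z = a @ E

print(a)        # one attention weight per word
print(z.shape)  # (4,)
```

Down-weighting words in this way is what lets the model build aspect representations from the informative words in a sentence rather than from every word equally.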
