Overcoming Language Variation in Sentiment Analysis with Social Attention

2015-11-19 · TACL 2017 · Code Available

Yi Yang, Jacob Eisenstein


Abstract

Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author's position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracy of sentiment analysis on Twitter and on review data.
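The core mechanism described above, attention over several basis sentiment models, where the attention weights are computed from the author's position in the social network, can be sketched as follows. This is a toy illustration with invented dimensions and random weights, not the paper's implementation: the real model learns node embeddings from the social graph and trains the basis classifiers jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): K basis models,
# d-dimensional text features, m-dimensional author node embeddings.
K, d, m = 3, 5, 4

W_basis = rng.normal(size=(K, d))   # one linear sentiment model per basis
b_basis = rng.normal(size=K)
A = rng.normal(size=(m, K))         # maps author embedding -> attention logits

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x_text, v_author):
    """Sentiment score as an attention-weighted mixture of basis models.

    The attention distribution over the K basis models depends only on
    the author's social-network embedding, so socially close authors
    (with similar embeddings) receive similar personalized classifiers.
    """
    attn = softmax(v_author @ A)          # (K,) attention weights, sum to 1
    scores = W_basis @ x_text + b_basis   # (K,) per-basis sentiment scores
    return float(attn @ scores)           # smooth combination

x = rng.normal(size=d)   # toy text feature vector
v = rng.normal(size=m)   # toy author embedding (e.g., from a graph embedder)
print(predict(x, v))
```

Because the attention weights vary smoothly with the author embedding, an author with no labeled data still gets a personalized classifier, inherited from nearby nodes in the social graph.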
