Context-Dependent Sentiment Analysis in User-Generated Videos
2017-07-01 · ACL 2017 · Code Available
Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, Louis-Philippe Morency
Code
- github.com/senticnet/sc-lstm (official, referenced in paper)
- github.com/soujanyaporia/multimodal-sentiment-analysis (TensorFlow)
Abstract
Multimodal sentiment analysis is a developing area of research that involves identifying sentiment in videos. Current research treats utterances as independent entities, i.e., it ignores the interdependencies and relations among the utterances of a video. In this paper, we propose an LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows 5-10% performance improvement over the state of the art and strong generalizability.
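The core idea — running a recurrent network over the sequence of utterances in a video so that each utterance's representation absorbs context from its neighbours — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the single-layer numpy LSTM, the feature dimensions, and all parameter shapes are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contextual_lstm(utterance_feats, W, U, b):
    """Run a single-layer LSTM over one video's sequence of utterance
    feature vectors, so each utterance's output vector carries context
    from the utterances that precede it (illustrative sketch only)."""
    d_h = U.shape[1]
    h = np.zeros(d_h)                  # hidden state
    c = np.zeros(d_h)                  # cell state
    contextual = []
    for x in utterance_feats:          # one step per utterance
        z = W @ x + U @ h + b          # stacked gate pre-activations, shape (4*d_h,)
        i = sigmoid(z[0 * d_h:1 * d_h])    # input gate
        f = sigmoid(z[1 * d_h:2 * d_h])    # forget gate
        o = sigmoid(z[2 * d_h:3 * d_h])    # output gate
        g = np.tanh(z[3 * d_h:4 * d_h])    # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
        contextual.append(h.copy())
    return np.stack(contextual)        # (num_utterances, d_h)

# Toy example: 5 utterances with 16-dim (e.g. fused multimodal) features.
rng = np.random.default_rng(0)
num_utt, d_in, d_h = 5, 16, 8
feats = rng.standard_normal((num_utt, d_in))
W = rng.standard_normal((4 * d_h, d_in)) * 0.1
U = rng.standard_normal((4 * d_h, d_h)) * 0.1
b = np.zeros(4 * d_h)
ctx = contextual_lstm(feats, W, U, b)
print(ctx.shape)  # one context-aware vector per utterance
```

Each row of `ctx` could then be fed to a per-utterance sentiment classifier; in practice one would use a bidirectional LSTM so context flows from both directions, and train the weights rather than sample them randomly.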