SentiX: A Sentiment-Aware Pre-Trained Model for Cross-Domain Sentiment Analysis

2020-12-01 · COLING 2020 · Code Available

Jie Zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, Liang He


Abstract

Pre-trained language models have been widely applied to cross-domain NLP tasks such as sentiment analysis, achieving state-of-the-art performance. However, because users express emotion differently across domains, fine-tuning a pre-trained model on the source domain tends to overfit, leading to inferior results on the target domain. In this paper, we pre-train a sentiment-aware language model (SentiX) on domain-invariant sentiment knowledge from large-scale review datasets and apply it to cross-domain sentiment analysis tasks without fine-tuning. We propose several pre-training tasks based on existing lexicons and annotations at both the token and sentence levels, such as emoticons, sentiment words, and ratings, requiring no human intervention. A series of experiments demonstrates the advantages of our model: we obtain new state-of-the-art results on all cross-domain sentiment analysis tasks, and SentiX trained with only 1% of the samples (18 samples) outperforms BERT trained with 90% of the samples.
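To make the token-level idea concrete, below is a minimal sketch of one way the "sentiment words" pre-training signal described in the abstract could be realized: masking lexicon words preferentially so the model must recover sentiment-bearing tokens during masked-LM pre-training. The lexicon, masking probabilities, and function names here are illustrative assumptions, not the paper's exact procedure.

```python
import random

# Toy sentiment lexicon; SentiX builds on existing lexicons, but this
# specific word list and the probabilities below are assumptions.
SENTIMENT_LEXICON = {"great", "terrible", "love", "awful", "happy"}
MASK_TOKEN = "[MASK]"

def mask_for_sentiment_mlm(tokens, p_sentiment=0.5, p_random=0.15, seed=None):
    """Return (masked_tokens, labels) for a masked-LM objective.

    Sentiment words are masked with probability p_sentiment, other
    tokens with the usual p_random; labels hold the original token at
    masked positions and None elsewhere.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        p = p_sentiment if tok.lower() in SENTIMENT_LEXICON else p_random
        if rng.random() < p:
            masked.append(MASK_TOKEN)   # model must predict the original token
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

# Example: sentiment words like "love" and "terrible" are likely targets.
tokens = "i love this phone but the battery is terrible".split()
print(mask_for_sentiment_mlm(tokens, seed=0))
```

A sentence-level counterpart would follow the same pattern, e.g. predicting the review's star rating from the sentence representation as an auxiliary head, so both granularities of sentiment supervision come from the review data itself rather than human annotation.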
