
Re-entry Prediction for Online Conversations via Self-Supervised Learning

2021-09-05 · Findings of EMNLP 2021

Lingzhi Wang, Xingshan Zeng, Huang Hu, Kam-Fai Wong, Daxin Jiang


Abstract

In recent years, online discussion and opinion sharing on social media have boomed. The re-entry prediction task has thus been proposed to help people keep track of the discussions they wish to continue. Nevertheless, existing works focus only on exploiting chatting history and context information, and ignore potentially useful learning signals underlying conversation data, such as conversation thread patterns and the repeated engagement of target users, which help to better understand the behavior of target users in conversations. In this paper, we propose three interesting and well-founded auxiliary tasks, namely Spread Pattern, Repeated Target user, and Turn Authorship, as self-supervised signals for re-entry prediction. These auxiliary tasks are trained together with the main task in a multi-task manner. Experimental results on two datasets newly collected from Twitter and Reddit show that our method outperforms the previous state-of-the-art models with fewer parameters and faster convergence. Extensive experiments and analysis demonstrate the effectiveness of our proposed models and also point out some key ideas for designing self-supervised tasks.
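The abstract describes training the three auxiliary self-supervised tasks jointly with the main re-entry prediction task in a multi-task manner. A minimal sketch of how such a joint objective is commonly formed is shown below; the weighted-sum combination and the `aux_weight` parameter are assumptions for illustration, not details taken from the paper.

```python
def multi_task_loss(main_loss, aux_losses, aux_weight=0.1):
    """Combine the main-task loss with auxiliary self-supervised losses.

    A standard multi-task formulation: the main re-entry prediction loss
    plus a weighted sum of the auxiliary losses (here: Spread Pattern,
    Repeated Target user, and Turn Authorship). The 0.1 weight is a
    hypothetical default, not a value from the paper.
    """
    return main_loss + aux_weight * sum(aux_losses)


# Example: per-batch scalar losses for the main task and the three
# auxiliary tasks (values are illustrative only).
loss = multi_task_loss(
    main_loss=0.8,
    aux_losses=[0.5, 0.3, 0.2],  # spread pattern, repeated target, turn authorship
)
```

In practice the four losses would come from task-specific heads sharing one conversation encoder, so gradients from the auxiliary tasks regularize the shared representation used for re-entry prediction.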
