Re-entry Prediction for Online Conversations via Self-Supervised Learning

Abstract

In recent years, online discussion and opinion sharing on social media have boomed. The re-entry prediction task is thus proposed to help people keep track of the discussions they wish to continue. Nevertheless, existing works focus only on exploiting chatting history and context information, ignoring potentially useful learning signals underlying conversation data, such as conversation thread patterns and the repeated engagement of target users, which help better model the behavior of target users in conversations. In this paper, we propose three well-founded auxiliary tasks, namely Spread Pattern, Repeated Target user, and Turn Authorship, as self-supervised signals for re-entry prediction. These auxiliary tasks are trained together with the main task in a multi-task manner. Experimental results on two datasets newly collected from Twitter and Reddit show that our method outperforms the previous state of the art with fewer parameters and faster convergence. Extensive experiments and analysis demonstrate the effectiveness of our proposed models and also point out some key considerations in designing self-supervised tasks.
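The abstract describes combining a main re-entry prediction loss with three auxiliary self-supervised losses in a multi-task manner. A minimal sketch of one common way to do this is a weighted sum of per-task losses; the function name, loss values, and equal weights below are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of a multi-task objective: the main re-entry prediction loss is
# combined with three hypothetical auxiliary self-supervised losses
# (spread pattern, repeated target user, turn authorship).
# All numeric values and weights here are illustrative assumptions.

def multitask_loss(main_loss, aux_losses, aux_weights):
    """Weighted sum of the main-task loss and the auxiliary-task losses."""
    if len(aux_losses) != len(aux_weights):
        raise ValueError("need one weight per auxiliary task")
    return main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))

# Example: main re-entry loss plus the three auxiliary losses.
total = multitask_loss(
    main_loss=0.9,
    aux_losses=[0.4, 0.3, 0.5],   # spread pattern, repeated target, authorship
    aux_weights=[0.1, 0.1, 0.1],  # equal weighting is an assumption
)
print(round(total, 3))  # 1.02
```

In practice the weights are hyperparameters tuned on validation data; the sketch only shows how the auxiliary signals enter a single training objective.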

Citation (APA)

Wang, L., Zeng, X., Hu, H., Wong, K. F., & Jiang, D. (2021). Re-entry Prediction for Online Conversations via Self-Supervised Learning. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 2127–2137). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.183
