Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in Multi-Turn Dialog


Abstract

Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some prior work automatically selects a sticker response by matching the text labels of stickers against previous utterances. However, given the sheer number of stickers, requiring text labels for all of them is impractical. Hence, in this paper, we propose to recommend an appropriate sticker to the user based on the multi-turn dialog context history, without any external labels. This task poses two main challenges. One is learning the semantic meaning of stickers without corresponding text labels. The other is jointly modeling a candidate sticker together with the multi-turn dialog context. To tackle these challenges, we propose a sticker response selector (SRS) model. Specifically, SRS first employs a convolution-based sticker image encoder and a self-attention-based multi-turn dialog encoder to obtain representations of stickers and utterances. Next, a deep interaction network performs deep matching between the sticker and each utterance in the dialog history. SRS then learns the short-term and long-term dependencies among all interaction results with a fusion network and outputs the final matching score. To evaluate our proposed method, we collect a large-scale real-world dialog dataset with stickers from one of the most popular online chatting platforms. Extensive experiments on this dataset show that our model achieves state-of-the-art performance on all commonly used metrics. Experiments also verify the effectiveness of each component of SRS. To facilitate further research in sticker selection, we release this dataset of 340K multi-turn dialog and sticker pairs.
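The abstract describes a pipeline of a convolutional sticker encoder, a self-attention dialog encoder, per-utterance interaction, and score fusion. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of that matching flow under assumed toy dimensions, with a trivial conv/pooling encoder, dot-product interaction, and a mean+max stand-in for the fusion network (all function names and shapes are illustrative, not from the paper).

```python
import numpy as np

def conv_encode(image, kernels):
    """Toy sticker encoder: one valid-mode 2D convolution per kernel,
    each globally average-pooled to a scalar, stacked into a vector."""
    h, w = image.shape
    feats = []
    for k in kernels:
        kh, kw = k.shape
        out = np.array([[np.sum(image[i:i + kh, j:j + kw] * k)
                         for j in range(w - kw + 1)]
                        for i in range(h - kh + 1)])
        feats.append(out.mean())
    return np.array(feats)

def self_attention(tokens):
    """Scaled dot-product self-attention over one utterance's token
    vectors, mean-pooled into a single utterance representation."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ tokens).mean(axis=0)

def matching_score(sticker_vec, utterance_vecs):
    """Interact the sticker with each utterance (dot product), then fuse
    the per-utterance scores; mean+max is a crude fusion stand-in."""
    per_utt = np.array([float(sticker_vec @ u) for u in utterance_vecs])
    return 0.5 * per_utt.mean() + 0.5 * per_utt.max()
```

At inference time one would score every candidate sticker against the encoded dialog context and recommend the argmax; the real SRS replaces each of these toy components with learned networks.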

Citation (APA)

Gao, S., Chen, X., Liu, C., Liu, L., Zhao, D., & Yan, R. (2020). Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in Multi-Turn Dialog. In The Web Conference 2020 - Proceedings of the World Wide Web Conference, WWW 2020 (pp. 1138–1148). Association for Computing Machinery, Inc. https://doi.org/10.1145/3366423.3380191
