MSDD: A Multimodal Language Dataset for Stance Detection


Abstract

Stance detection is the task of automatically determining whether the author of a text is positive, negative, or neutral towards a given target. Correctly detecting stance is conducive to false news detection, claim validation, and argument search. Detecting stance from certain types of conversation, especially multimodal conversation, is an interesting problem that has not been carefully explored. In social interaction, people usually express their stance in a multimodal manner, through words (text), gestures (video), and prosodic cues (audio). Stance detection is an established research area in NLP, but it remains understudied in a multimodal context. In this paper, we present MSDD, a novel multimodal dataset for stance detection, to explore how multimodal language expresses stance in conversation. We conducted a series of experiments on MSDD, and the results show that multimodal information indeed improves dialogue stance detection to some extent, but the fusion of the multimodal signals needs to be enhanced.
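The abstract describes combining text, audio, and video cues for three-way stance classification. As a rough illustration of the kind of multimodal fusion involved, the sketch below concatenates per-modality feature vectors and applies a linear classifier. All feature dimensions, weights, and function names are hypothetical assumptions for illustration, not the paper's actual model or the MSDD pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-utterance feature vectors (dimensions are illustrative,
# not taken from the paper).
text_feat = rng.standard_normal(300)   # e.g. averaged word embeddings
audio_feat = rng.standard_normal(64)   # e.g. prosodic features
video_feat = rng.standard_normal(128)  # e.g. gesture/facial features

def early_fusion(text, audio, video):
    """Concatenate modality features into one multimodal vector."""
    return np.concatenate([text, audio, video])

def classify(fused, weights, bias):
    """Linear 3-way classifier over {positive, negative, neutral}."""
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())   # stable softmax
    probs = exp / exp.sum()
    labels = ["positive", "negative", "neutral"]
    return labels[int(np.argmax(probs))], probs

fused = early_fusion(text_feat, audio_feat, video_feat)
weights = rng.standard_normal((3, fused.shape[0]))  # untrained, for shape only
bias = np.zeros(3)
label, probs = classify(fused, weights, bias)
```

In practice the paper's finding, that fusion "needs to be enhanced", suggests simple concatenation like this is a weak baseline compared to learned cross-modal interactions.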

Citation (APA)

Hu, M., Liu, P., Wang, W., Zhang, H., & Lin, C. (2023). MSDD: A Multimodal Language Dataset for Stance Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13495 LNAI, pp. 112–124). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-28953-8_10
