From Text to Speech: A Multimodal Cross-Domain Approach for Deception Detection

Abstract

Deception detection — identifying when someone is trying to make someone else believe something that is not true — is a hard task for humans. The task is even harder for automatic approaches, which must also cope with problems such as the lack of sufficient labeled data. In this context, transfer learning in the form of cross-domain classification aims to leverage labeled data from domains where it is available for domains where data is scarce. This paper presents a study on the suitability of linguistic features for cross-domain deception detection on multimodal data. Specifically, we learn models for deception detection across different domains of written texts (one modality) and apply the acquired knowledge to unrelated topics transcribed from spoken statements (another modality). Experimental results reveal that, using LIWC and POS n-grams, we reach an in-modality accuracy of 69.42% and an AUC ROC of 0.7153. When doing transfer learning, we achieve an accuracy of 63.64% and an AUC ROC of 0.6351.
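The cross-domain setup described above — training on labeled written texts from one domain and evaluating on transcribed spoken statements from another — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the toy POS-tag sequences and labels are invented, the tags are assumed to be precomputed by an external tagger, and a linear SVM stands in for whichever classifier the paper actually uses.

```python
# Hedged sketch of cross-domain classification with POS n-gram features.
# Assumptions (not from the paper): documents are already POS-tagged and
# represented as space-separated tag sequences; labels 1 = deceptive, 0 = truthful.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Toy source-domain (written text) and target-domain (speech transcript) data
source_docs = ["PRP VBD DT NN", "PRP VBP RB JJ", "DT NN VBZ JJ", "PRP MD VB DT NN"]
source_y = [1, 0, 0, 1]
target_docs = ["PRP VBD RB JJ", "DT NN VBD DT NN"]
target_y = [0, 1]

# POS unigrams and bigrams as surface features (token_pattern keeps whole tags)
vec = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X_src = vec.fit_transform(source_docs)   # fit vocabulary on the source domain only
X_tgt = vec.transform(target_docs)       # project the target domain into it

# Train on the source domain, evaluate on the unseen target domain
clf = LinearSVC().fit(X_src, source_y)
acc = accuracy_score(target_y, clf.predict(X_tgt))
print(f"cross-domain accuracy: {acc:.2f}")
```

Fitting the vectorizer on the source domain only is the key point: the target domain contributes no vocabulary or labels at training time, which is what makes the evaluation genuinely cross-domain.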

Citation (APA)

Rill-García, R., Villaseñor-Pineda, L., Reyes-Meza, V., & Escalante, H. J. (2019). From Text to Speech: A Multimodal Cross-Domain Approach for Deception Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11188 LNCS, pp. 164–177). Springer Verlag. https://doi.org/10.1007/978-3-030-05792-3_16
