Abstract
Automatic deception detection is a challenging problem because human behavior is too complex to establish standard behavioral signs that explicitly indicate a person is lying. Furthermore, it is difficult to collect naturalistic datasets for supervised learning, as both external annotation and self-annotation may be unreliable for labeling deception. To address these issues, we collected the TRuLie dataset, which consists of synchronously recorded videos (34 hours in total) and data from contact photoplethysmography (PPG) and a hardware eye-tracker of ninety-three subjects who tried to feign innocence during interrogation after committing mock crimes. This yielded multimodal fragments labeled as lie (n=3380) and truth (n=6444). We trained an end-to-end convolutional neural network (CNN) on this dataset to predict lie and truth from audio and video, and also built classifiers on combined features extracted from video, audio, PPG, the eye-tracker, and the CNN's predictions. The best classifier (LightGBM) achieved a mean balanced accuracy of 0.64 and an F1-score of 0.76 under 5-fold cross-validation.
Citation
Karpova, V., Popenova, P., Glebko, N., Lyashenko, V., & Perepelkina, O. (2020). “Was It You Who Stole 500 Rubles?” - The Multimodal Deception Detection. In ICMI 2020 Companion - Companion Publication of the 2020 International Conference on Multimodal Interaction (pp. 112–119). Association for Computing Machinery, Inc. https://doi.org/10.1145/3395035.3425638