Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia

Abstract

We present a scalable multimodal dialog platform for the remote digital assessment and monitoring of schizophrenia. Patients diagnosed with schizophrenia and healthy controls interacted with Tina, a virtual conversational agent, as she guided them through a brief set of structured tasks, while their speech and facial video were streamed in real time to a back-end analytics module. Patients were concurrently assessed by trained raters on validated clinical scales. We find that multiple speech and facial biomarkers extracted from these data streams show significant differences (as measured by effect sizes) between patients and controls, and furthermore, that machine learning models built on such features can classify patients and controls with high sensitivity and specificity. Using correlation analysis between the extracted metrics and standardized clinical scales for the assessment of schizophrenia symptoms, we further investigate how such speech and facial biomarkers can provide insight into schizophrenia symptomatology.
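The abstract reports group differences via effect sizes and classifier performance via sensitivity and specificity. As a minimal sketch of how such summary statistics are typically computed (this is not the paper's actual pipeline; the feature values and labels below are hypothetical), one can use Cohen's d for the group comparison and confusion-matrix counts for the classifier metrics:

```python
import math

def cohens_d(group_a, group_b):
    """Effect size between two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    # Unbiased (n-1) sample variances for each group
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP). Label 1 = patient."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-speaker values of one speech feature (e.g. speaking rate)
patients = [2.1, 2.4, 1.9, 2.2, 2.0]
controls = [2.8, 3.1, 2.9, 3.0, 2.7]
print(f"Cohen's d: {cohens_d(patients, controls):.2f}")

# Hypothetical classifier predictions against true group labels
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```

A large negative d here would indicate patients score well below controls on the feature; the paper reports effect sizes across many such speech and facial metrics.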

Citation (APA)
Richter, V., Neumann, M., Kothare, H., Roesler, O., Liscombe, J., Suendermann-Oeft, D., … Ramanarayanan, V. (2022). Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia. In ACM International Conference Proceeding Series (pp. 171–176). Association for Computing Machinery. https://doi.org/10.1145/3536220.3558075
