Abstract
Behavior change is one of the most important goals in psychotherapy. This study focuses on Motivational Interviewing (MI), a collaborative communication style aimed at eliciting the client's own reasons for behavior change. To investigate the effectiveness of facial information in modeling MI, we collected an MI encounter corpus with speech and video data in the nutrition and fitness domains and annotated client utterances using the Manual for the Motivational Interviewing Skill Code (MISC). By analyzing clients' answers to post-session questions, we found that clients who expressed more Change Talk were more motivated to change their behavior than those who expressed less. We then proposed RNN-based multimodal models to detect Change Talk, framed as a two-class classification task: "Change Talk" versus "not Change Talk." Our experiments showed that the best-performing model was a multimodal BiLSTM that fused language and client facial information. We also found that fusing language and facial information as context achieved better performance than the unimodal and no-context models. Moreover, we discuss the label imbalance problem and conduct an additional analysis using turns as the unit of analysis. Our best model reached an F1-score of 0.65 for Change Talk detection.
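The abstract's core model is a BiLSTM that fuses language and facial features for binary Change Talk detection. Below is a minimal, hedged sketch of such an architecture in PyTorch; the feature dimensions (300-d text embeddings, 17-d facial features) and the early-fusion design are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class MultimodalBiLSTM(nn.Module):
    """Illustrative sketch (not the paper's exact architecture):
    a BiLSTM over per-step concatenations of language and facial
    features, followed by a two-class (Change Talk / not) head."""

    def __init__(self, text_dim=300, face_dim=17, hidden=128):
        super().__init__()
        # Early fusion: the LSTM input is the concatenated modalities.
        self.bilstm = nn.LSTM(text_dim + face_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 2)

    def forward(self, text_feats, face_feats):
        # text_feats: (batch, steps, text_dim)
        # face_feats: (batch, steps, face_dim)
        fused = torch.cat([text_feats, face_feats], dim=-1)
        out, _ = self.bilstm(fused)
        # Classify from the final time step's bidirectional state.
        return self.classifier(out[:, -1])


# Example: a batch of 4 utterance sequences, 10 steps each.
model = MultimodalBiLSTM()
logits = model(torch.randn(4, 10, 300), torch.randn(4, 10, 17))
print(logits.shape)  # torch.Size([4, 2])
```

Early fusion (concatenating modalities before the recurrent layer) is only one of several possible fusion strategies; the paper also compares unimodal and no-context variants.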
Nakano, Y. I., Hirose, E., Sakato, T., Okada, S., & Martin, J. C. (2022). Detecting Change Talk in Motivational Interviewing using Verbal and Facial Information. In ACM International Conference Proceeding Series (pp. 5–14). Association for Computing Machinery. https://doi.org/10.1145/3536221.3556607