Automating annotation of information-giving for analysis of clinical conversation

Abstract

Objective: Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting the application of these methods. We aimed to show that, through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters.

Materials and methods: The data were transcripts of 415 routine outpatient visits of HIV patients, which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger-scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and information-requesting, then trained the machine to annotate them automatically using logistic regression classification. We evaluated reliability by per-speech-act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys, using the patient and provider information-giving to information-requesting ratio (briefly, the information-giving ratio) and patient gender.

Results: Automated coding produces moderate reliability with respect to human coding (accuracy 71.2%, κ = 0.57), with high correlation between machine and human prediction of the information-giving ratio (r = 0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r = 0.263-0.344).

Discussion: The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
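The pipeline the abstract describes — classifying utterances as information-giving versus information-requesting with logistic regression, then computing an information-giving ratio per encounter — can be sketched roughly as follows. This is a minimal illustration using scikit-learn with invented toy utterances and labels; the vocabulary, the training data, and the `information_giving_ratio` helper are assumptions for demonstration, not the authors' GMIAS-coded corpus or their actual feature set.

```python
# Hypothetical sketch of the approach: logistic regression over bag-of-words
# features to label utterances, then an information-giving ratio per visit.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training utterances: 1 = information-giving, 0 = information-requesting.
# (Invented examples; the paper trains on GMIAS-coded transcripts.)
utterances = [
    "your viral load is undetectable",
    "this medication can cause nausea",
    "the test results came back normal",
    "you should take it with food",
    "how have you been feeling",
    "are you taking the pills every day",
    "do you have any side effects",
    "when did the pain start",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(utterances)
clf = LogisticRegression().fit(X, labels)

def information_giving_ratio(transcript):
    """Ratio of predicted information-giving to information-requesting acts."""
    preds = clf.predict(vectorizer.transform(transcript))
    giving = int(preds.sum())
    requesting = len(preds) - giving
    return giving / max(requesting, 1)  # guard against division by zero
```

In the paper this ratio, computed separately for patient and provider speech, is then entered into a multiple regression (alongside patient gender) to predict post-visit survey measures of communication quality.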

Citation (APA)

Mayfield, E., Laws, M. B., Wilson, I. B., & Rosé, C. P. (2014). Automating annotation of information-giving for analysis of clinical conversation. Journal of the American Medical Informatics Association, 21(E2). https://doi.org/10.1136/amiajnl-2013-001898
