An estimated 20% of patients admitted to hospital wards are affected by delirium. Early detection is recommended so that the underlying causes of delirium can be treated; however, workforce strain in general wards often leaves it undetected. This work proposes a robotic implementation of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) to aid early detection of delirium. Interactive features of the assessment are performed through human-robot interaction (HRI), while a Transformer-based deep learning model predicts the patient's Richmond Agitation Sedation Scale (RASS) level from image sequences; thermal imaging is used to preserve patient anonymity. A user study involving 18 participants, each role-playing the alert, agitated, and sedated levels of the RASS, was conducted to test the HRI components and to collect a dataset for deep learning. The HRI system achieved accuracies of 1.0 and 0.833 on the inattention and disorganised thinking features of the CAM-ICU, respectively, while the trained action recognition model achieved a mean accuracy of 0.852 in classifying RASS levels under cross-validation. Together, these three features represent a complete set of capabilities for automated delirium detection using the CAM-ICU, and the results demonstrate the feasibility of real-world deployment in hospital general wards.
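The abstract does not specify the model internals, so the following is only a minimal sketch (in PyTorch) of how a Transformer-based classifier over thermal image sequences might be structured: per-frame features from a small convolutional encoder, a Transformer encoder over the temporal axis, and a three-way head for the alert, agitated, and sedated classes. The RassTransformer module, its layer sizes, and the sequence length are illustrative assumptions, not the authors' implementation.

# Hedged sketch: a minimal Transformer-based classifier for RASS levels
# (alert / agitated / sedated) from thermal image sequences. Architecture
# details here are assumptions; the paper's exact model is not given in
# this abstract.
import torch
import torch.nn as nn


class RassTransformer(nn.Module):
    def __init__(self, num_classes=3, d_model=256, nhead=4, num_layers=2, seq_len=32):
        super().__init__()
        # Per-frame encoder: single-channel thermal frame -> d_model feature vector.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Learnable positional embeddings over the temporal axis.
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, frames):
        # frames: (batch, seq_len, 1, H, W) thermal image sequence.
        b, t, c, h, w = frames.shape
        feats = self.frame_encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats = feats + self.pos_embed[:, :t]
        encoded = self.temporal_encoder(feats)
        # Mean-pool over time, then predict one of the three RASS-derived classes.
        return self.classifier(encoded.mean(dim=1))


if __name__ == "__main__":
    model = RassTransformer()
    dummy = torch.randn(2, 32, 1, 64, 64)  # two sequences of 32 thermal frames
    print(model(dummy).shape)  # torch.Size([2, 3])

In a setup like this, the same cross-validation protocol mentioned in the abstract would be applied over participants, holding out each role-player's sequences in turn.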
Jeffcock, J., Hansen, M., & Garate, V. R. (2023). Transformers and human-robot interaction for delirium detection. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 466–474). IEEE Computer Society. https://doi.org/10.1145/3568162.3576971