Monitoring surgeon workload during robot-assisted surgery can guide the allocation of task demands, adapt system interfaces, and assess the robotic system’s usability. Current practices for measuring cognitive load rely primarily on questionnaires, which are subjective and disrupt surgical workflow. To address this limitation, a computational framework is demonstrated that predicts user workload during telerobotic surgery. The framework leverages wireless sensors to monitor surgeons’ cognitive load and predict their cognitive states. Continuous data across multiple physiological modalities (e.g., heart rate variability, electrodermal activity, and electroencephalogram activity) were recorded simultaneously for twelve surgeons performing surgical skills tasks on the validated da Vinci Skills Simulator. The tasks varied in difficulty, e.g., in the visual processing demand and degree of fine motor control required. The collected multimodal physiological signals were fused using independent component analysis, and the predicted results were compared to ground-truth workload levels. Results compared the performance of different classifiers, sensor fusion schemes, and physiological modalities (i.e., prediction with single vs. multiple modalities). The multisensor approach outperformed individual signals, correctly predicting cognitive workload levels 83.2% of the time during basic and complex surgical skills tasks.
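The pipeline the abstract describes — concatenating features from multiple physiological modalities, fusing them with independent component analysis, and classifying workload level — can be sketched as follows. This is a minimal illustration using synthetic data; the feature dimensions, component count, and choice of a random-forest classifier are assumptions for demonstration, not the authors' actual configuration.

```python
# Sketch of a multimodal workload-prediction pipeline: ICA fusion of
# physiological features followed by classification. All data below is
# synthetic; dimensions and model choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Per-trial feature vectors from three modalities (dimensions assumed):
n_trials = 120
hrv = rng.normal(size=(n_trials, 4))   # heart rate variability features
eda = rng.normal(size=(n_trials, 3))   # electrodermal activity features
eeg = rng.normal(size=(n_trials, 8))   # EEG band-power features
X = np.hstack([hrv, eda, eeg])         # concatenated multimodal features

# Synthetic binary ground-truth workload labels (low vs. high).
y = rng.integers(0, 2, size=n_trials)

# Fuse the concatenated modalities into independent components.
ica = FastICA(n_components=6, random_state=0)
X_fused = ica.fit_transform(X)

# Classify workload level from the fused components.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X_fused, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With real labeled physiological data in place of the random arrays, the cross-validated accuracy would indicate how well the fused representation separates workload levels; here the labels are random, so accuracy hovers near chance.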
Zhou, T., Cha, J. S., Gonzalez, G., Wachs, J. P., Sundaram, C. P., & Yu, D. (2020). Multimodal Physiological Signals for Workload Prediction in Robot-assisted Surgery. ACM Transactions on Human-Robot Interaction, 9(2). https://doi.org/10.1145/3368589