Telementoring generalist surgeons as they treat patients can be essential when in situ expertise is not readily available. However, adverse cyber-attacks, unreliable network conditions, and remote mentors' predisposition can significantly jeopardize the remote intervention. To provide medical practitioners with guidance when mentors are unavailable, we present the AI-Medic, the initial steps towards the development of a multimodal intelligent artificial system for autonomous medical mentoring. The system uses a tablet device to acquire the view of the operating field. This imagery is fed to an encoder-decoder neural network trained to predict medical instructions from the current view of the surgery. The network was trained on DAISI, a dataset of images and instructions that provide step-by-step demonstrations of surgical procedures. The predicted medical instructions are conveyed to the user via visual and auditory modalities.
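The abstract does not specify the exact network, so the following is only a minimal sketch of the kind of encoder-decoder described: an image encoder summarizes the operating-field view and a recurrent decoder emits instruction tokens. The ResNet-18 backbone, GRU decoder, dimensions, and vocabulary size are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: the paper does not disclose its architecture; this assumes a
# CNN image encoder (ResNet-18) feeding a GRU decoder over instruction tokens.
import torch
import torch.nn as nn
from torchvision import models


class InstructionPredictor(nn.Module):
    """Encoder-decoder: surgical-field image in, instruction-token logits out."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: CNN backbone with the final classifier removed.
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.img_proj = nn.Linear(backbone.fc.in_features, hidden_dim)
        # Decoder: GRU conditioned on the image feature via its initial state.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, token_ids):
        feats = self.encoder(images).flatten(1)             # (B, 512)
        h0 = torch.tanh(self.img_proj(feats)).unsqueeze(0)  # (1, B, hidden)
        emb = self.embed(token_ids)                         # (B, T, embed)
        dec_out, _ = self.decoder(emb, h0)                  # (B, T, hidden)
        return self.out(dec_out)                            # (B, T, vocab)


# Example: one 224x224 RGB frame, a 5-token instruction prefix, 1000-word vocab.
model = InstructionPredictor(vocab_size=1000)
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 1000, (1, 5)))
print(logits.shape)  # torch.Size([1, 5, 1000])
```

At inference time such a decoder would be run autoregressively (greedy or beam search) to produce the full instruction, which could then be rendered visually and read aloud on the tablet.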
Rojas-Muñoz, E., Couperus, K., & Wachs, J. P. (2020). The AI-Medic: A Multimodal Artificial Intelligent Mentor for Trauma Surgery. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 766–767). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3421167