The objective of this research is to develop and evaluate a context-aware Augmented Reality system that filters content based on the local context of the surgical instrument. We optically track the positions of the patient and the instrument and interpret these data to recognize the current phase of the operation. Depending on the result, an appropriate visualization is generated and displayed. For the interpretation, we combine a rule-based, deductive approach with a case-based, inductive one; both rely on a description-logic-based ontology. In phantom experiments, the system was used to support implant positioning in models of the mandible. It recognized the phase correctly and provided an appropriate visualization about 85% of the time. The knowledge-based concept for intraoperative assistance proved capable of generating useful visualizations in a timely manner. However, further work is necessary to improve accuracy and reduce the deviation between the actual and planned implant positions.
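The two-stage interpretation described above — deductive rules first, with a case-based fallback when no rule fires — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (`tip_to_site_mm`, `speed_mm_s`), the threshold values, the phase labels, and the case base are all hypothetical stand-ins for whatever the tracked ontology individuals actually encode.

```python
from dataclasses import dataclass
import math

@dataclass
class Observation:
    # Hypothetical features derived from optical tracking:
    # distance of the instrument tip to the planned implant site (mm)
    # and instrument speed (mm/s).
    tip_to_site_mm: float
    speed_mm_s: float

def rule_based_phase(obs: Observation):
    """Deductive stage: hand-written rules (illustrative thresholds)."""
    if obs.tip_to_site_mm > 50.0:
        return "approach"
    if obs.tip_to_site_mm <= 5.0 and obs.speed_mm_s < 2.0:
        return "drilling"
    return None  # rules inconclusive -> fall back to the case base

# Inductive stage: a case base of previously labeled observations
# (entirely made-up example data).
CASES = [
    (Observation(80.0, 20.0), "approach"),
    (Observation(3.0, 1.0), "drilling"),
    (Observation(15.0, 5.0), "positioning"),
]

def case_based_phase(obs: Observation) -> str:
    """Nearest-neighbour lookup in the feature space."""
    def dist(a: Observation, b: Observation) -> float:
        return math.hypot(a.tip_to_site_mm - b.tip_to_site_mm,
                          a.speed_mm_s - b.speed_mm_s)
    return min(CASES, key=lambda case: dist(obs, case[0]))[1]

def recognize_phase(obs: Observation) -> str:
    """Combine both stages: deduction first, induction as fallback."""
    return rule_based_phase(obs) or case_based_phase(obs)

print(recognize_phase(Observation(60.0, 10.0)))  # rule fires: approach
print(recognize_phase(Observation(14.0, 4.0)))   # fallback: positioning
```

The recognized phase would then select which visualization to render, so a misclassification degrades only the displayed overlay, not the tracking itself.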
Zheng, G., Liao, H., Jannin, P., Cattin, P., & Lee, S.-L. (2016). Erratum to: Medical Imaging and Augmented Reality (pp. E1–E1). https://doi.org/10.1007/978-3-319-43775-0_40