Medical Imaging and Augmented Reality

  • Katić D
  • Sudra G
  • Speidel S
  • et al.
ISSN: 0302-9743

Abstract

The objective of this research is to develop and evaluate a context-aware Augmented Reality system that filters content based on the local context of the surgical instrument. We optically track the positions of the patient and the instrument and interpret these data to recognize the phase of the operation. Depending on the result, an appropriate visualization is generated and displayed. For the interpretation, we combine a rule-based, deductive approach with a case-based, inductive one. Both rely on a description-logic-based ontology. In phantom experiments the system was used to support implant positioning in models of the mandible. It recognized the phase correctly and provided an appropriate visualization about 85% of the time. The knowledge-based concept for intraoperative assistance proved capable of generating useful visualizations in a timely manner. However, further work is necessary to improve accuracy and reduce the deviation between the actual and planned implant positions.
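
The abstract describes a two-stage interpretation pipeline: a rule-based, deductive check is tried first, a case-based, inductive lookup serves as fallback, and the recognized phase then selects which AR content to display. The sketch below illustrates that control flow only; the feature vector (instrument-to-patient distance, speed, drill state), the thresholds, the phase names, and the visualization table are illustrative assumptions, and the hard-coded rules merely stand in for the description-logic ontology used in the actual system.

```python
from dataclasses import dataclass
from math import dist

# Hypothetical situation features derived from optical tracking data.
@dataclass
class Situation:
    distance_mm: float    # instrument-to-mandible distance
    speed_mm_s: float     # instrument speed
    drill_active: bool

    def as_vector(self):
        return (self.distance_mm, self.speed_mm_s, 1.0 if self.drill_active else 0.0)

# Deductive step: hand-written rules standing in for the ontology's axioms.
def rule_based_phase(s: Situation):
    if s.drill_active and s.distance_mm < 5.0:
        return "drilling"
    if s.distance_mm < 20.0 and s.speed_mm_s < 2.0:
        return "implant_positioning"
    return None  # no rule fired; defer to the case base

# Inductive step: nearest-neighbour lookup in a small base of labelled cases.
CASE_BASE = [
    (Situation(60.0, 15.0, False), "approach"),
    (Situation(25.0, 3.0, False), "implant_positioning"),
    (Situation(3.0, 1.0, True), "drilling"),
]

def case_based_phase(s: Situation) -> str:
    return min(CASE_BASE, key=lambda c: dist(c[0].as_vector(), s.as_vector()))[1]

# Phase-dependent content filtering for the AR overlay (illustrative mapping).
VISUALIZATION = {
    "approach": ["patient_anatomy"],
    "implant_positioning": ["planned_implant_axis", "target_depth"],
    "drilling": ["depth_gauge", "critical_structures_warning"],
}

def recognize_and_visualize(s: Situation):
    phase = rule_based_phase(s) or case_based_phase(s)
    return phase, VISUALIZATION[phase]

if __name__ == "__main__":
    phase, overlay = recognize_and_visualize(
        Situation(distance_mm=4.0, speed_mm_s=0.8, drill_active=True)
    )
    print(phase, overlay)  # drilling ['depth_gauge', 'critical_structures_warning']
```

Running the example with a close, slow, drill-active situation yields the "drilling" phase and its overlay; in the published system, the ontology and the tracked geometry of patient and instrument would drive both interpretation steps.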

Citation (APA)

Katić, D., Sudra, G., Speidel, S., Castrillon-Oberndorfer, G., Eggers, G., & Dillmann, R. (2010). In Medical Imaging and Augmented Reality (Vol. 6326, pp. 531–540). Retrieved from http://www.springerlink.com/index/10.1007/978-3-642-15699-1
