A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine

  • Sonntag D
  • Schulz C

Abstract

The dialogue components we developed provide the infrastructure for the disseminated industrial prototype RadSpeech, a semantic speech dialogue system for radiologists. The major contribution of this paper is the description of a new speech-based interaction scenario for RadSpeech in which two radiologists use two independent but related mobile speech devices (an iPad and an iPhone) and collaborate via a connected large-screen installation using related speech commands. Traditional user interfaces let users browse or explore patient data, but offer little to no support for structuring collaborative user input or for annotating radiology images in real time with ontology-based medical annotations. A distinctive feature of the interaction design is that the touch screens of the mobile devices are used for the more complex tasks, rather than for simpler ones such as mere remote control of the image display on the large screen.
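The paper itself contains no code; as a purely hypothetical illustration of the kind of infrastructure the abstract describes, the following Python sketch models speech commands from two mobile devices being routed to a shared large-screen display as ontology-based annotation events. All names, identifiers, and the event structure are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch, not from the paper: multi-device speech commands
    # routed to a shared display as ontology-based image annotations.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class OntologyAnnotation:
        """An ontology-based image annotation (concept id is illustrative)."""
        concept_id: str              # e.g. an ontology concept identifier
        label: str                   # human-readable concept name
        image_id: str                # the radiology image being annotated
        region: Tuple[int, int, int, int]  # (x, y, w, h) region of interest

    @dataclass
    class SpeechCommand:
        """A parsed speech command issued from one of the mobile devices."""
        device_id: str               # e.g. "ipad-1" or "iphone-1"
        intent: str                  # e.g. "annotate"
        annotation: Optional[OntologyAnnotation] = None

    class SharedScreen:
        """Stand-in for the large-screen installation: collects annotations
        arriving from any connected device in real time."""
        def __init__(self) -> None:
            self.annotations: List[OntologyAnnotation] = []

        def apply(self, cmd: SpeechCommand) -> None:
            if cmd.intent == "annotate" and cmd.annotation is not None:
                self.annotations.append(cmd.annotation)
                print(f"[{cmd.device_id}] annotated {cmd.annotation.image_id}"
                      f" with '{cmd.annotation.label}'")

    # Two radiologists on two devices issue related commands against
    # one shared display.
    screen = SharedScreen()
    screen.apply(SpeechCommand(
        device_id="ipad-1", intent="annotate",
        annotation=OntologyAnnotation(
            "CONCEPT-001", "lymph node", "img-042", (120, 80, 40, 40))))
    screen.apply(SpeechCommand(
        device_id="iphone-1", intent="annotate",
        annotation=OntologyAnnotation(
            "CONCEPT-002", "lesion", "img-042", (200, 150, 30, 30))))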

Citation (APA)

Sonntag, D., & Schulz, C. (2014). A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine. In Natural Interaction with Robots, Knowbots and Smartphones (pp. 37–47). Springer New York. https://doi.org/10.1007/978-1-4614-8280-2_4
