Developing portable context-aware multimodal applications for connected devices using the W3C multimodal architecture

Abstract

Cue-me™, a reference implementation of the W3C's multimodal interaction (MMI) architecture, is a context-aware multimodal authoring and run-time platform that securely houses various modality components and facilitates cross-platform development of multimodal applications. It features several multimodal elements, such as face recognition, speech recognition (ASR) and synthesis (TTS), digital annotations/gestures (ink), motion sensing, and EEG-headset-based interactions, all developed using the W3C MMI Architecture and markup languages. The MMI architecture, described elsewhere in this volume, facilitates single authoring of multimodal applications and shields developers from the nuances of how individual modality components are implemented or distributed.
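
To make the decoupling concrete: in the W3C MMI architecture, an interaction manager talks to every modality component through the same set of life-cycle events, so the application author never touches a component's internals. The sketch below, a minimal illustration and not Cue-me's actual API, builds one such event, an mmi:StartRequest, as defined by the MMI Architecture and Interfaces specification; the source/target URIs and content URL are placeholder assumptions.

import xml.etree.ElementTree as ET

# Namespace defined by the W3C MMI Architecture and Interfaces spec.
MMI_NS = "http://www.w3.org/2008/04/mmi-arch"
ET.register_namespace("mmi", MMI_NS)

def start_request(context, source, target, request_id, content_url):
    """Build an mmi:StartRequest life-cycle event wrapped in mmi:mmi."""
    root = ET.Element(f"{{{MMI_NS}}}mmi", {"version": "1.0"})
    req = ET.SubElement(root, f"{{{MMI_NS}}}StartRequest", {
        "Context": context,       # identifies the ongoing interaction
        "Source": source,         # URI of the interaction manager
        "Target": target,         # URI of the modality component
        "RequestID": request_id,  # correlates request and response
    })
    # Point the modality component at the markup it should execute
    # (e.g., VoiceXML for a speech component); inline mmi:Content
    # is the spec's alternative to mmi:ContentURL.
    ET.SubElement(req, f"{{{MMI_NS}}}ContentURL", {"href": content_url})
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # All URIs below are hypothetical placeholders.
    print(start_request(
        context="ctx-1",
        source="im://interaction-manager",
        target="mc://speech-recognizer",
        request_id="req-42",
        content_url="http://example.com/dialog.vxml",
    ))

The addressed component would answer with an mmi:StartResponse carrying the same Context and RequestID, which is what lets the interaction manager drive ASR, TTS, ink, or any other modality component through one uniform protocol.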

Citation (APA)

Tumuluri, R., & Kharidi, N. (2016). Developing portable context-aware multimodal applications for connected devices using the W3C multimodal architecture. In Multimodal Interaction with W3C Standards: Toward Natural User Interfaces to Everything (pp. 173–211). Springer International Publishing. https://doi.org/10.1007/978-3-319-42816-1_9
